WorldWideScience

Sample records for extreme scale computing

  1. Extreme Scale Computing Studies

    Science.gov (United States)

    2010-12-01

    systems that would fall under the Exascale rubric. In this chapter, we first discuss the attributes by which achievement of the label “Exascale” may be... Carrington, and E. Strohmaier. A Genetic Algorithms Approach to Modeling the Performance of Memory-bound Computations. Reno, NV, November 2007. ACM/IEEE... genetic stochasticity (random mating, mutation, etc.). Outcomes are thus stochastic as well, and ecologists wish to ask questions like, “What is the

  2. Extreme Scale Computing to Secure the Nation

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

    2009-11-10

    absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today. In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT) together with the U.S. administration's promise for a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in the safety and reliability without reliance upon calibration with past or future test data is a long-term goal of the ASC program. This

  3. Extreme Scale Computing for First-Principles Plasma Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Choong-Seock [Princeton University]

    2011-10-12

    World superpowers are in the middle of the “Computnik” race. The US Department of Energy (and the National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme scale computer system requires different types of algorithms and programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study the fusion plasma dynamics in full-scale realistic diverted device geometry in a natural nonlinear multiscale setting, including the large-scale neoclassical and small-scale turbulence physics, but excluding some ultra-fast dynamics. In this talk, most of the above-mentioned topics will be introduced at an executive level. Representative properties of the extreme scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.

  4. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]; Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]; Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]; Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]

    2017-06-09

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  5. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Brunst, Holger [Dresden Univ. of Technology (Germany)]

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  6. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)]; Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)]

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  7. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)]

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  8. Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR Staging

    Science.gov (United States)

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  9. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Poster

    Science.gov (United States)

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  10. Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR

    Science.gov (United States)

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  11. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Sewell, Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Childs, Hank [Univ. of Oregon, Eugene, OR (United States)]; Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)]; Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)]; Meredith, Jeremy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  12. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Childs, Hank [Univ. of Oregon, Eugene, OR (United States)]; Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)]; Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)]

    2017-10-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  13. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D.; Sewell, Christopher (LANL); Childs, Hank (U of Oregon); Ma, Kwan-Liu (UC Davis); Geveci, Berk (Kitware); Meredith, Jeremy (ORNL)

    2016-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  14. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Childs, Hank [Univ. of Oregon, Eugene, OR (United States)]; Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)]; Geveci, Berk [Kitware Inc., Clifton Park, NY (United States)]

    2017-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  15. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world and is the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009) and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU-funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current-generation Petascale-capable simulation codes towards the performance levels required for running on future Exascale systems. One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and

  16. Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

    2009-07-02

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) in partnership with the Office of Advanced Scientific Computing Research (ASCR) held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. The four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; and Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

  17. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases, it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilize current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  18. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.; Roller, Sabine P.; Seitsonen, Ari Paavo; Valcke, Sophie; Keyes, David E.; Sawley, Marie Christine; Schulthess, Thomas C.; Shalf, John M.

    2013-01-01

    and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators

  19. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar M. [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science]

    2017-06-06

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.

  20. Scientific Grand Challenges: Discovery In Basic Energy Sciences: The Role of Computing at the Extreme Scale - August 13-15, 2009, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Galli, Giulia [Univ. of California, Davis, CA (United States). Workshop Chair]; Dunning, Thom [Univ. of Illinois, Urbana, IL (United States). Workshop Chair]

    2009-08-13

    The U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences (BES) and Office of Advanced Scientific Computing Research (ASCR) workshop in August 2009 on extreme-scale computing provided a forum for more than 130 researchers to explore the needs and opportunities that will arise due to expected dramatic advances in computing power over the next decade. This scientific community firmly believes that the development of advanced theoretical tools within chemistry, physics, and materials science—combined with the development of efficient computational techniques and algorithms—has the potential to revolutionize the discovery process for materials and molecules with desirable properties. Doing so is necessary to meet the energy and environmental challenges of the 21st century as described in various DOE BES Basic Research Needs reports. Furthermore, computational modeling and simulation are a crucial complement to experimental studies, particularly when quantum mechanical processes controlling energy production, transformations, and storage are not directly observable and/or controllable. Many processes related to the Earth’s climate and subsurface need better modeling capabilities at the molecular level, which will be enabled by extreme-scale computing.

  1. Software challenges in extreme scale systems

    International Nuclear Information System (INIS)

    Sarkar, Vivek; Harrod, William; Snavely, Allan E

    2009-01-01

    Computer systems anticipated in the 2015-2020 timeframe are referred to as Extreme Scale because they will be built using massive multi-core processors with hundreds of cores per chip. The largest capability Extreme Scale system is expected to deliver Exascale performance on the order of 10^18 operations per second. These systems pose new critical challenges for software in the areas of concurrency, energy efficiency and resiliency. In this paper, we discuss the implications of the concurrency and energy efficiency challenges on future software for Extreme Scale Systems. From an application viewpoint, the concurrency and energy challenges boil down to the ability to express and manage parallelism and locality by exploring a range of strong scaling and new-era weak scaling techniques. For expressing parallelism and locality, the key challenges are the ability to expose all of the intrinsic parallelism and locality in a programming model, while ensuring that this expression of parallelism and locality is portable across a range of systems. For managing parallelism and locality, the OS-related challenges include parallel scalability, spatial partitioning of OS and application functionality, direct hardware access for inter-processor communication, and asynchronous rather than interrupt-driven events, which are accompanied by runtime system challenges for scheduling, synchronization, memory management, communication, performance monitoring, and power management. We conclude by discussing the importance of software-hardware co-design in addressing the fundamental challenges for application enablement on Extreme Scale systems.

  2. Extreme-scale Algorithms and Solver Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States)]

    2016-12-10

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates, requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  3. Extreme-Scale De Novo Genome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Georganas, Evangelos [Intel Corporation, Santa Clara, CA (United States)]; Hofmeyr, Steven [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]; Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division]; Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]; Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]; Rokhsar, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division]; Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.]

    2017-09-26

    De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
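
    The HipMer pipeline itself targets distributed-memory supercomputers; purely to illustrate the "distributed hash table of k-mers" idea mentioned above, the Python sketch below extracts k-mers from reads and assigns each k-mer to an owning rank by hashing. The k-mer length, rank count, and reads are made-up values, not HipMer's.

    ```python
    # Minimal sketch (not HipMer itself): k-mer extraction and hashed "ownership",
    # showing how a distributed hash table of k-mers can be partitioned across
    # ranks in an assembly pipeline. K, NUM_RANKS, and the reads are made up.
    from collections import Counter
    from zlib import crc32

    K = 21          # k-mer length (illustrative)
    NUM_RANKS = 4   # number of simulated ranks/processes

    def kmers(read, k=K):
        """Yield all k-mers of a DNA read."""
        for i in range(len(read) - k + 1):
            yield read[i:i + k]

    def owner(kmer, nranks=NUM_RANKS):
        """Map a k-mer to the rank that stores it (hash partitioning)."""
        return crc32(kmer.encode()) % nranks

    # Per-rank k-mer count tables: in a real distributed assembler these live on
    # separate nodes and k-mers are exchanged with all-to-all communication.
    tables = [Counter() for _ in range(NUM_RANKS)]
    reads = ["ACGTACGTACGTACGTACGTACGTT", "CGTACGTACGTACGTACGTACGTTA"]
    for read in reads:
        for km in kmers(read):
            tables[owner(km)][km] += 1

    print(sum(len(t) for t in tables), "distinct k-mers across", NUM_RANKS, "ranks")
    ```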

  4. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Loth, Eric [University of Virginia]; Kaminski, Meghan [University of Virginia]; Qin, Chao [University of Virginia]; Griffith, D. Todd [Sandia National Laboratories]

    2017-06-09

    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows sub-scale turbines to capture the key blade dynamics and aeroelastic deflections of their full-scale counterparts. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust, and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts, including dynamic behaviors and aeroelastic deflections (including flutter), at an extremely small fraction of the full-scale cost.
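
    As a quick worked check of the numbers quoted above, if the gravo-aeroelastic constraint is taken to mean matching the Froude number $V^2/(gL)$ between the full-scale and sub-scale rotors (an assumption consistent with, though not spelled out in, the abstract), then

    \[ \frac{V_{\text{sub}}}{V_{\text{full}}} = \sqrt{\frac{L_{\text{sub}}}{L_{\text{full}}}} = \sqrt{0.25} = \frac{1}{2}, \]

    which reproduces the stated factor-of-two reduction in test wind speeds for 25% geometric scaling.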

  5. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)]

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  6. Application of the extreme value theory to beam loss estimates in the SPIRAL2 linac based on large scale Monte Carlo computations

    Directory of Open Access Journals (Sweden)

    R. Duperrier

    2006-04-01

    The influence of random perturbations of high-intensity accelerator elements on beam losses is considered. This paper presents the error sensitivity study which has been performed for the SPIRAL2 linac in order to define the tolerances for the construction. The proposed driver aims to accelerate a 5 mA deuteron beam up to 20 A MeV and a 1 mA ion beam for q/A=1/3 up to 14.5 A MeV. It is a continuous-wave linac, designed for maximum efficiency in the transmission of intense beams and a tunable energy. It consists of an injector (two ECR sources + LEBTs, with the possibility to inject from several sources, plus a radio frequency quadrupole) followed by a superconducting section based on an array of independently phased cavities, where the transverse focusing is performed with warm quadrupoles. The correction scheme and the expected losses are described. The extreme value theory is used to estimate the expected beam losses. The described method couples large-scale computations to obtain probability distribution functions. The bootstrap technique is used to provide confidence intervals associated with the beam loss predictions. With such a method, it is possible to quantify the risk of losing a few watts in this high-power linac (up to 200 kW).
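
    The abstract pairs an extreme-value tail model with bootstrap confidence intervals. The Python sketch below shows that combination in miniature, fitting a Generalized Pareto tail to surrogate Monte Carlo "loss" samples with scipy; the distribution, threshold, and sample sizes are arbitrary stand-ins, not SPIRAL2 data or the authors' code.

    ```python
    # Illustrative sketch: peaks-over-threshold fit plus bootstrap confidence
    # interval for a high quantile of simulated losses. All numbers are made up.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    losses = rng.lognormal(mean=0.0, sigma=1.0, size=20000)  # surrogate Monte Carlo losses
    u = np.quantile(losses, 0.95)                            # peaks-over-threshold cutoff
    excesses = losses[losses > u] - u

    def high_quantile(exc, p=0.999, n_total=len(losses), u=u):
        """p-quantile of the losses implied by a GPD fit to the excesses."""
        xi, _, sigma = genpareto.fit(exc, floc=0.0)
        zeta = len(exc) / n_total                            # exceedance probability of u
        return u + genpareto.ppf(1 - (1 - p) / zeta, xi, loc=0.0, scale=sigma)

    # Bootstrap over the excesses to attach a confidence interval to the estimate.
    boot = [high_quantile(rng.choice(excesses, size=len(excesses), replace=True))
            for _ in range(500)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"0.999 quantile: {high_quantile(excesses):.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```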

  7. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  8. Climatic forecast: down-scaling and extremes

    International Nuclear Information System (INIS)

    Deque, M.; Li, L.

    2007-01-01

    There is a strong demand for specifying the future climate at local scale and about extreme events. New methods, allowing a better output from the climate models, are currently being developed and French laboratories involved in the Escrime project are actively participating. (authors)

  9. Asynchronous schemes for CFD at extreme scales

    Science.gov (United States)

    Konduri, Aditya; Donzis, Diego

    2013-11-01

    Recent advances in computing hardware and software have made simulations an indispensable research tool for understanding fluid flow phenomena in complex conditions at great detail. Due to the nonlinear nature of the governing NS equations, simulations of high-Re turbulent flows are computationally very expensive and demand extreme levels of parallelism. Current large simulations are being done on hundreds of thousands of processing elements (PEs). Benchmarks from these simulations show that communication between PEs takes a substantial amount of time, overwhelming the compute time and resulting in substantial waste of compute cycles as PEs remain idle. We investigate a novel approach based on widely used finite-difference schemes in which computations are carried out asynchronously, i.e., synchronization of data among PEs is not enforced and computations proceed regardless of the status of messages. This drastically reduces PE idle time and results in much larger computation rates. We show that while these schemes remain stable, their accuracy is significantly affected. We present new schemes that maintain accuracy under asynchronous conditions and provide a viable path towards exascale computing. Performance of these schemes will be shown for simple models like Burgers' equation.
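
    A toy Python sketch of relaxing synchronization in a finite-difference solver: two simulated "processing elements" update halves of a 1D linear advection problem, and the halo value exchanged at their interface may lag by a few time steps. The scheme, delay statistics, and parameters are illustrative assumptions, not the asynchrony-tolerant schemes developed by the authors.

    ```python
    # Conceptual sketch: upwind advection with a possibly stale halo value at the
    # interface between two simulated PEs, mimicking relaxed synchronization.
    import numpy as np

    nx, nt, c, dx, dt = 200, 400, 1.0, 1.0 / 200, 0.002   # CFL = c*dt/dx = 0.4
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)                   # initial Gaussian pulse

    half = nx // 2
    rng = np.random.default_rng(1)
    halo_history = [u[half - 1]]                          # values "sent" by PE0 to PE1

    for n in range(nt):
        delay = rng.integers(0, 3)                        # PE1 may read a stale halo
        halo = halo_history[max(0, len(halo_history) - 1 - delay)]

        u_new = u.copy()
        # PE0: upwind update (periodic wrap on the left for simplicity)
        u_new[0:half] = u[0:half] - c * dt / dx * (u[0:half] - np.roll(u, 1)[0:half])
        # PE1: its first point uses the (possibly delayed) halo from PE0
        u_new[half] = u[half] - c * dt / dx * (u[half] - halo)
        u_new[half + 1:] = u[half + 1:] - c * dt / dx * (u[half + 1:] - u[half:-1])

        u = u_new
        halo_history.append(u[half - 1])

    print("max|u| after transport with relaxed halo sync:", float(np.abs(u).max()))
    ```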

  10. Extreme Physics and Informational/Computational Limits

    Energy Technology Data Exchange (ETDEWEB)

    Di Sia, Paolo, E-mail: paolo.disia@univr.it, E-mail: 10alla33@virgilio.it [Department of Computer Science, Faculty of Science, Verona University, Strada Le Grazie 15, I-37134 Verona (Italy) and Faculty of Computer Science, Free University of Bozen, Piazza Domenicani 3, I-39100 Bozen-Bolzano (Italy)]

    2011-07-08

    A sector of current theoretical physics, also called 'extreme physics', deals with topics concerning superstring theories, the multiverse, quantum teleportation, negative energy, and more, which only a few years ago were considered scientific imagination or purely speculative physics. Present experimental lines of evidence and implications of cosmological observations seem, on the contrary, to support such theories. These new physical developments lead to informational limits, such as the quantity of information that a physical system can record, and computational limits, resulting from considerations regarding black holes and space-time fluctuations. In this paper I consider important limits for information and computation resulting in particular from string theories and their foundations.

  11. Extreme Physics and Informational/Computational Limits

    International Nuclear Information System (INIS)

    Di Sia, Paolo

    2011-01-01

    A sector of current theoretical physics, also called 'extreme physics', deals with topics concerning superstring theories, the multiverse, quantum teleportation, negative energy, and more, which only a few years ago were considered scientific imagination or purely speculative physics. Present experimental lines of evidence and implications of cosmological observations seem, on the contrary, to support such theories. These new physical developments lead to informational limits, such as the quantity of information that a physical system can record, and computational limits, resulting from considerations regarding black holes and space-time fluctuations. In this paper I consider important limits for information and computation resulting in particular from string theories and their foundations.

  12. Computational discovery of extremal microstructure families

    Science.gov (United States)

    Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech

    2018-01-01

    Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124

  13. Frameworks for visualization at the extreme scale

    International Nuclear Information System (INIS)

    Joy, Kenneth I; Miller, Mark; Childs, Hank; Bethel, E Wes; Clyne, John; Ostrouchov, George; Ahern, Sean

    2007-01-01

    The challenges of visualization at the extreme scale involve issues of scale, complexity, temporal exploration, and uncertainty. The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increased scientific discovery and insight. In this paper, we introduce new uses of visualization frameworks through the introduction of Equivalence Class Functions (ECFs). These functions give a new class of derived quantities designed to greatly expand the ability of the end user to explore and visualize data. ECFs are defined over equivalence classes (i.e., groupings) of elements from an original mesh, and produce summary values for the classes as output. ECFs can be used in the visualization process to directly analyze data, or can be used to synthesize new derived quantities on the original mesh. The design of ECFs enables a parallel implementation that allows the use of these techniques on massive data sets that require parallel processing.
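
    A minimal Python sketch of the Equivalence Class Function idea as described above: group mesh elements into equivalence classes, compute a summary value per class, and optionally map the summaries back onto the original mesh as a new derived quantity. The toy field and class labels are invented for illustration; this is not VACET code.

    ```python
    # Toy ECF: per-class summaries over mesh elements, mapped back to the mesh.
    import numpy as np

    # A mesh of 10 cells: a scalar field and a class label (e.g. a material id).
    temperature = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
    material_id = np.array([0, 0, 1, 1, 1, 2, 2, 0, 2, 1])

    def ecf(values, classes, summary=np.mean):
        """Apply `summary` over each equivalence class; return {class: value}."""
        return {c: summary(values[classes == c]) for c in np.unique(classes)}

    per_class_mean = ecf(temperature, material_id)                 # analysis output
    derived = np.array([per_class_mean[c] for c in material_id])   # back on the mesh

    print("per-class means:", per_class_mean)
    print("derived cell field:", derived)
    ```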

  14. Multi-level programming paradigm for extreme computing

    International Nuclear Information System (INIS)

    Petiton, S.; Sato, M.; Emad, N.; Calvin, C.; Tsuji, M.; Dandouna, M.

    2013-01-01

    In order to propose a framework and programming paradigms for post-petascale computing, on the road to exascale computing and beyond, we introduced new languages, associated with a hierarchical multi-level programming paradigm, allowing scientific end-users and developers to program highly hierarchical architectures designed for extreme computing. In this paper, we explain the interest of such a hierarchical multi-level programming paradigm for extreme computing and its suitability for several large computational science applications, such as linear algebra solvers used for reactor core physics. We describe the YML language and framework, which allow describing graphs of parallel components that may be developed using a PGAS-like language such as XMP and then scheduled and computed on supercomputers. Then, we present experiments on supercomputers (such as the 'K' and 'Hopper' machines) with the hybrid method MERAM (Multiple Explicitly Restarted Arnoldi Method) as a case study for iterative methods manipulating sparse matrices, and the block Gauss-Jordan method as a case study for direct methods manipulating dense matrices. We conclude by proposing evolutions of this programming paradigm. (authors)
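
    As a concrete reference for the dense-matrix case study named above, here is a serial NumPy sketch of block Gauss-Jordan inversion. In the multi-level paradigm the block operations would become parallel components described in YML and implemented with a PGAS-like language; this stand-alone version shows only the block algorithm itself, with arbitrary block and matrix sizes.

    ```python
    # Serial sketch of block Gauss-Jordan matrix inversion (no pivot search).
    import numpy as np

    def block_gauss_jordan_inverse(A, b):
        """Invert A by Gauss-Jordan elimination on b-by-b blocks."""
        n = A.shape[0]
        assert n % b == 0, "matrix size must be a multiple of the block size"
        p = n // b
        B = [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(p)] for i in range(p)]
        for k in range(p):
            piv_inv = np.linalg.inv(B[k][k])      # pivot block assumed invertible
            B[k][k] = piv_inv
            for j in range(p):
                if j != k:
                    B[k][j] = piv_inv @ B[k][j]
            for i in range(p):
                if i == k:
                    continue
                factor = B[i][k]
                for j in range(p):
                    if j != k:
                        B[i][j] = B[i][j] - factor @ B[k][j]
                B[i][k] = -factor @ piv_inv
        return np.block(B)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)   # well-conditioned test matrix
    print(np.allclose(block_gauss_jordan_inverse(A, 2) @ A, np.eye(8)))
    ```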

  15. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL]; Naughton III, Thomas J [ORNL]

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path for implementing network contention and bandwidth capacity modeling, using a less synchronous yet sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  16. Computational data sciences for assessment and prediction of climate extremes

    Science.gov (United States)

    Ganguly, A. R.

    2011-12-01

    Climate extremes may be defined inclusively as severe weather events or large shifts in global or regional weather patterns which may be caused or exacerbated by natural climate variability or climate change. This area of research arguably represents one of the largest knowledge gaps in climate science, which is relevant for informing resource managers and policy makers. While physics-based climate models are essential in view of non-stationary and nonlinear dynamical processes, their current pace of uncertainty reduction may not be adequate for urgent stakeholder needs. The structure of the models may in some cases preclude reduction of uncertainty for critical processes at scales or for the extremes of interest. On the other hand, methods based on complex networks, extreme value statistics, machine learning, and space-time data mining have demonstrated significant promise to improve scientific understanding and generate enhanced predictions. When combined with conceptual process understanding at multiple spatiotemporal scales and designed to handle massive data, interdisciplinary data science methods and algorithms may complement or supplement physics-based models. Specific examples from the prior literature and our ongoing work suggest how data-guided improvements may be possible, for example, in the context of ocean meteorology, climate oscillators, teleconnections, and atmospheric process understanding, which in turn can improve projections of regional climate, precipitation extremes and tropical cyclones in a useful and interpretable fashion. A community-wide effort is motivated to develop and adapt computational data science tools for translating climate model simulations to information relevant for adaptation and policy, as well as for improving our scientific understanding of climate extremes from both observed and model-simulated data.

  17. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    Science.gov (United States)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

    Extreme Weather Events (EWEs) cause negative impacts socially, economically, and environmentally. Considering these facts, forecasting EWEs is crucial work. Indonesia has been identified as being among the countries most vulnerable to the risk of natural disasters, such as floods, heat waves, and droughts. Current forecasting of extreme events in Indonesia is carried out by interpreting synoptic maps for several fields without taking into account the link between the observed events in the 'target' area and remote conditions. This situation may cause misidentification of the event, leading to an inaccurate prediction. Grotjahn and Faure (2008) compute composite maps from extreme events (including heat waves and intense rainfall) to help forecasters identify such events in model output. The composite maps show large scale meteorological patterns (LSMP) that occurred during historical EWEs. Some vital information about the EWEs can be acquired from studying such maps, in addition to providing forecaster guidance. Such maps have robust mid-latitude meteorological patterns (for Sacramento and California Central Valley, USA EWEs). We study the performance of the composite approach for tropical weather conditions such as Indonesia's. Initially, the composite maps are developed to identify and forecast extreme weather events in Indramayu district, West Java, the main rice-producing district in Indonesia, contributing about 60% of the national total rice production. Studying extreme weather events happening in Indramayu is important since EWEs there affect national agricultural and fisheries activities. During a recent EWE, more than a thousand houses in Indramayu suffered serious flooding, with each home more than one meter underwater. The flood also destroyed a thousand hectares of rice plantings in 5 regencies. Identifying the dates of extreme events is one of the most important steps and has to be carried out carefully. An approach has been applied to identify the
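
    The compositing step described above (after Grotjahn and Faure, 2008) amounts to averaging a large-scale field over the dates of identified extreme events and comparing the result with climatology. Below is a toy Python sketch with synthetic data; the field, grid, and event dates are placeholders, not the Indramayu analysis.

    ```python
    # Toy composite map: mean field anomaly over identified extreme-event days.
    import numpy as np

    rng = np.random.default_rng(42)
    n_days, nlat, nlon = 3650, 20, 40
    field = rng.standard_normal((n_days, nlat, nlon))    # e.g. daily geopotential height
    event_days = np.array([100, 465, 831, 1196, 2022])   # dates of identified events

    climatology = field.mean(axis=0)                     # long-term mean per grid point
    composite = field[event_days].mean(axis=0) - climatology

    print("composite anomaly map shape:", composite.shape)
    print("largest composite anomaly:", float(np.abs(composite).max()))
    ```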

  18. Faster Parallel Traversal of Scale Free Graphs at Extreme Scale with Vertex Delegates

    KAUST Repository

    Pearce, Roger

    2014-11-01

    © 2014 IEEE. At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), which leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices. We present techniques to distribute storage, computation, and communication of hubs for extreme-scale graphs in distributed-memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate the scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.
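
    A highly simplified Python sketch of the delegate idea: vertices whose degree exceeds a threshold are treated as hubs and their edges are spread across all ranks, while ordinary vertices are hash-partitioned to a single owner. The threshold, rank count, and toy edge list are assumptions for illustration only, not the paper's implementation.

    ```python
    # Toy hub delegation: spread hub edges over all ranks, hash the rest.
    from collections import defaultdict

    NUM_RANKS = 4
    HUB_DEGREE_THRESHOLD = 4

    edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5),   # vertex 0 is a hub
             (1, 2), (2, 3), (4, 5), (5, 6)]

    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    hubs = {v for v, d in degree.items() if d >= HUB_DEGREE_THRESHOLD}

    def owner(v):
        """Single owning rank for an ordinary (low-degree) vertex."""
        return hash(v) % NUM_RANKS

    partition = defaultdict(list)                       # rank -> edges it stores
    for i, (u, v) in enumerate(edges):
        if u in hubs or v in hubs:
            partition[i % NUM_RANKS].append((u, v))     # hub edges round-robined to delegates
        else:
            partition[owner(u)].append((u, v))          # ordinary edges go to one owner

    for r in range(NUM_RANKS):
        print(f"rank {r}: {partition[r]}")
    ```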

  19. Faster Parallel Traversal of Scale Free Graphs at Extreme Scale with Vertex Delegates

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2014-01-01

    © 2014 IEEE. At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), which leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices. We present techniques to distribute storage, computation, and communication of hubs for extreme-scale graphs in distributed-memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate the scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.

  20. Improving the Performance of the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL]; Naughton III, Thomas J [ORNL]

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, such as by reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) benchmark, as well as from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.

  1. Stereology of extremes; bivariate models and computation

    Czech Academy of Sciences Publication Activity Database

    Beneš, Viktor; Bodlák, M.; Hlubinka, D.

    2003-01-01

    Vol. 5, No. 3 (2003), pp. 289-308. ISSN 1387-5841. R&D Projects: GA AV ČR IAA1075201; GA ČR GA201/03/0946. Institutional research plan: CEZ:AV0Z1075907. Keywords: sample extremes; domain of attraction; normalizing constants. Subject RIV: BA - General Mathematics

  2. The Spatial Scaling of Global Rainfall Extremes

    Science.gov (United States)

    Devineni, N.; Xi, C.; Lall, U.; Rahill-Marier, B.

    2013-12-01

    Floods associated with severe storms are a significant source of risk for property, life and supply chains. These property losses tend to be determined as much by the duration of flooding as by the depth and velocity of inundation. Long-duration floods are typically induced by persistent rainfall (up to 30-day duration), as seen recently in Thailand, Pakistan, the Ohio and the Mississippi Rivers, France, and Germany. Events related to persistent and recurrent rainfall appear to correspond to the persistence of specific global climate patterns that may be identifiable from global, historical data fields, and also from climate models that project future conditions. A clear understanding of the space-time rainfall patterns for events or for a season will enable assessment of the spatial distribution of areas likely to have a high/low inundation potential for each type of rainfall forcing. In this paper, we investigate the statistical properties of the spatial manifestation of the rainfall exceedances. We also investigate the connection of persistent rainfall events at different latitudinal bands to large-scale climate phenomena such as ENSO. Finally, we present the scaling phenomena of contiguous flooded areas as a result of large scale organization of long duration rainfall events. This can be used for spatially distributed flood risk assessment conditional on a particular rainfall scenario. Statistical models for spatio-temporal loss simulation including model uncertainty to support regional and portfolio analysis can be developed.

  3. Scaling a Survey Course in Extreme Weather

    Science.gov (United States)

    Samson, P. J.

    2013-12-01

    "Extreme Weather" is a survey-level course offered at the University of Michigan that is broadcast via the web and serves as a research testbed to explore best practices for large class conduct. The course has led to the creation of LectureTools, a web-based student response and note-taking system that has been shown to increase student engagement dramatically in multiple courses by giving students more opportunities to participate in class. Included in this is the capacity to pose image-based questions (see image where question was "Where would you expect winds from the south") as well as multiple choice, ordered list, free response and numerical questions. Research in this class has also explored differences in learning outcomes from those who participate remotely versus those who physically come to class and found little difference. Moreover the technologies used allow instructors to conduct class from wherever they are while the students can still answer questions and engage in class discussion from wherever they are. This presentation will use LectureTools to demonstrate its features. Attendees are encouraged to bring a mobile device to the session to participate.

  4. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
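
    As a minimal, generic illustration of the peaks-over-threshold setup described above (not the authors' Bayesian disaggregation model), the sketch below fits a GPD to exceedances over a high quantile threshold using SciPy; the synthetic data, threshold choice, and exceedance rate are assumptions.

```python
# Minimal peaks-over-threshold sketch: fit a Generalized Pareto Distribution
# to exceedances above a high quantile of synthetic "precipitation" data.
# This illustrates the general POT workflow, not the paper's Bayesian model.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
precip = rng.gamma(shape=0.5, scale=8.0, size=20_000)  # synthetic non-zero rainfall

threshold = np.quantile(precip, 0.95)          # threshold taken from a high quantile
excess = precip[precip > threshold] - threshold

# Fit the GPD to the excesses; location is fixed at 0 by construction.
shape, loc, scale = genpareto.fit(excess, floc=0.0)

# Return level for a T-year event, assuming a hypothetical average exceedance
# rate derived from treating the synthetic values as daily observations.
rate = 365.25 * (precip > threshold).mean()    # exceedances per year (assumption)
T = 100  # years
return_level = threshold + genpareto.ppf(1 - 1.0 / (rate * T), shape, loc=0, scale=scale)
print(f"xi={shape:.3f}, sigma={scale:.3f}, {T}-yr return level={return_level:.1f}")
```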

  5. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors that will be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community

  6. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have tended to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.

  7. Making extreme computations possible with virtual machines

    International Nuclear Information System (INIS)

    Reuter, J.; Chokoufe Nejad, B.

    2016-02-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
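
    To make the byte-code idea concrete, here is a toy stack-based virtual machine in Python. The instruction set and constant pool are hypothetical and only illustrate the general pattern of interpreting a compact numeric byte-code stream; the actual O'Mega/WHIZARD interpreter is written in a compiled language with its own instruction set.

```python
# Toy stack-based virtual machine: evaluates a numeric expression encoded as a
# compact instruction stream. This is NOT the O'Mega/WHIZARD instruction set,
# just an illustration of byte-code interpretation.
PUSH, ADD, MUL = 0, 1, 2

def run(bytecode, constants):
    stack = []
    pc = 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH:                    # push constants[operand] onto the stack
            pc += 1
            stack.append(constants[bytecode[pc]])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
        pc += 1
    return stack.pop()

# Encodes (2.0 + 3.0) * 4.0 as byte-code over a constant pool.
constants = [2.0, 3.0, 4.0]
program = [PUSH, 0, PUSH, 1, ADD, PUSH, 2, MUL]
print(run(program, constants))  # 20.0
```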

  8. Investigating the Scaling Properties of Extreme Rainfall Depth ...

    African Journals Online (AJOL)

    Investigating the Scaling Properties of Extreme Rainfall Depth Series in Oromia Regional State, Ethiopia. ... Science, Technology and Arts Research Journal ... for storm duration ranging from 0.5 to 24 hr observed at network of rain gauges sited in Oromia regional state were analyzed using an approach based on moments.

  9. Temporal and spatial scaling impacts on extreme precipitation

    Science.gov (United States)

    Eggert, B.; Berg, P.; Haerter, J. O.; Jacob, D.; Moseley, C.

    2015-01-01

    Both in the current climate and in the light of climate change, understanding of the causes and risk of precipitation extremes is essential for protection of human life and adequate design of infrastructure. Precipitation extreme events depend qualitatively on the temporal and spatial scales at which they are measured, in part due to the distinct types of rain formation processes that dominate extremes at different scales. To capture these differences, we first filter large datasets of high-resolution radar measurements over Germany (5 min temporally and 1 km spatially) using synoptic cloud observations, to distinguish convective and stratiform rain events. In a second step, for each precipitation type, the observed data are aggregated over a sequence of time intervals and spatial areas. The resulting matrix allows a detailed investigation of the resolutions at which convective or stratiform events are expected to contribute most to the extremes. We analyze where the statistics of the two types differ and discuss at which resolutions transitions occur between dominance of either of the two precipitation types. We characterize the scales at which the convective or stratiform events will dominate the statistics. For both types, we further develop a mapping between pairs of spatially and temporally aggregated statistics. The resulting curve is relevant when deciding on data resolutions where statistical information in space and time is balanced. Our study may hence also serve as a practical guide for modelers, and for planning the space-time layout of measurement campaigns. We also describe a mapping between different pairs of resolutions, possibly relevant when working with mismatched model and observational resolutions, such as in statistical bias correction.
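
    The aggregation step described above can be sketched generically as block sums of a high-resolution space-time field; the synthetic array and aggregation factors below are assumptions, and the convective/stratiform split is not reproduced.

```python
# Generic space-time aggregation sketch: sum a high-resolution precipitation
# field (time, y, x) into coarser blocks, then look at how an extreme quantile
# changes with aggregation scale. Synthetic data stands in for radar observations.
import numpy as np

rng = np.random.default_rng(0)
field = rng.exponential(scale=0.1, size=(288, 64, 64))  # synthetic "5-min, 1-km" grid

def aggregate(field, t_fac, s_fac):
    """Sum over non-overlapping blocks of t_fac time steps and s_fac x s_fac cells."""
    nt, ny, nx = field.shape
    f = field[: nt - nt % t_fac, : ny - ny % s_fac, : nx - nx % s_fac]
    f = f.reshape(nt // t_fac, t_fac, ny // s_fac, s_fac, nx // s_fac, s_fac)
    return f.sum(axis=(1, 3, 5))

for t_fac, s_fac in [(1, 1), (3, 2), (12, 4)]:
    agg = aggregate(field, t_fac, s_fac)
    q99 = np.quantile(agg, 0.99)
    print(f"time x{t_fac}, space x{s_fac}: 99th percentile = {q99:.2f}")
```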

  10. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate, for individual sites, the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large-scale).
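
    A generic sketch of the dimension-reduction and clustering step: project flattened atmospheric fields into a low-dimensional space with kernel PCA and cluster the result. scikit-learn is used here as an assumed stand-in, with synthetic data in place of the reanalysis fields and without the supervised variant used in the study.

```python
# Generic sketch: reduce flattened moisture-flux fields to a low-dimensional
# space with kernel PCA, then cluster the events. Synthetic data stands in
# for the reanalysis fields used in the study.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 200 "flood events", each a flattened 20x30 moisture-flux anomaly field
events = rng.normal(size=(200, 20 * 30))

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=1e-3)
low_dim = kpca.fit_transform(events)          # events in a 3-D latent space

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(low_dim)
for k in range(4):
    print(f"cluster {k}: {np.sum(clusters == k)} events")
```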

  11. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  12. Quantum universe on extremely small space-time scales

    International Nuclear Information System (INIS)

    Kuzmichev, V.E.; Kuzmichev, V.V.

    2010-01-01

    The semiclassical approach to the quantum geometrodynamical model is used for the description of the properties of the Universe on extremely small space-time scales. Under this approach, the matter in the Universe has two components of a quantum nature which behave as antigravitating fluids. The first component does not vanish in the limit h → 0 and can be associated with dark energy. The second component is described by an extremely rigid equation of state and goes to zero after the transition to large space-time scales. On small space-time scales, this quantum correction turns out to be significant. It determines the geometry of the Universe near the initial cosmological singularity point. This geometry is conformal to a unit four-sphere embedded in a five-dimensional Euclidean flat space. During the subsequent expansion of the Universe, when reaching the post-Planck era, the geometry of the Universe changes into that conformal to a unit four-hyperboloid in a five-dimensional Lorentz-signatured flat space. This agrees with the hypothesis about the possible change of geometry after the origin of the expanding Universe from the region near the initial singularity point. The origin of the Universe can be interpreted as a quantum transition of the system from a region in the phase space forbidden for the classical motion, but where a trajectory in imaginary time exists, into a region where the equations of motion have the solution which describes the evolution of the Universe in real time. Near the boundary between the two regions, from the side of real time, the Universe undergoes an almost exponential expansion, which passes smoothly into the radiation-dominated expansion described by the standard cosmological model.

  13. Visualization and parallel I/O at extreme scale

    International Nuclear Information System (INIS)

    Ross, R B; Peterka, T; Shen, H-W; Hong, Y; Ma, K-L; Yu, H; Moreland, K

    2008-01-01

    In our efforts to solve ever more challenging problems through computational techniques, the scale of our compute systems continues to grow. As we approach petascale, it becomes increasingly important that all the resources in the system be used as efficiently as possible, not just the floating-point units. Because of hardware, software, and usability challenges, storage resources are often one of the most poorly used and performing components of today's compute systems. This situation can be especially true in the case of the analysis phases of scientific workflows. In this paper we discuss the impact of large-scale data on visual analysis operations and examine a collection of approaches to I/O in the visual analysis process. First we examine the performance of volume rendering on a leadership-computing platform and assess the relative cost of I/O, rendering, and compositing operations. Next we analyze the performance implications of eliminating preprocessing from this example workflow. Then we describe a technique that uses data reorganization to improve access times for data-intensive volume rendering

  14. Computer-Administered Interviews and Rating Scales

    Science.gov (United States)

    Garb, Howard N.

    2007-01-01

    To evaluate the value of computer-administered interviews and rating scales, the following topics are reviewed in the present article: (a) strengths and weaknesses of structured and unstructured assessment instruments, (b) advantages and disadvantages of computer administration, and (c) the validity and utility of computer-administered interviews…

  15. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  16. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  17. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  18. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  19. Validity and Reliability of the Upper Extremity Work Demands Scale.

    Science.gov (United States)

    Jacobs, Nora W; Berduszek, Redmar J; Dijkstra, Pieter U; van der Sluis, Corry K

    2017-12-01

    Purpose To evaluate the validity and reliability of the upper extremity work demands (UEWD) scale. Methods Participants from different levels of physical work demands, based on the Dictionary of Occupational Titles categories, were included. A historical database of 74 workers was added for factor analysis. Criterion validity was evaluated by comparing observed and self-reported UEWD scores. To assess structural validity, a factor analysis was executed. For reliability, the difference between two self-reported UEWD scores, the smallest detectable change (SDC), test-retest reliability and internal consistency were determined. Results Fifty-four participants were observed at work and 51 of them filled in the UEWD twice with a mean interval of 16.6 days (SD 3.3, range = 10-25 days). Criterion validity of the UEWD scale was moderate (r = .44, p = .001). Factor analysis revealed that 'force and posture' and 'repetition' subscales could be distinguished with Cronbach's alpha of .79 and .84, respectively. Reliability was good; there was no significant difference between repeated measurements. An SDC of 5.0 was found. Test-retest reliability was good (intraclass correlation coefficient for agreement = .84) and all item-total correlations were >.30. There were two pairs of highly related items. Conclusion Reliability of the UEWD scale was good, but criterion validity was moderate. Based on current results, a modified UEWD scale (2 items removed, 1 item reworded, divided into 2 subscales) was proposed. Since observation appeared to be an inappropriate gold standard, we advise investigating other types of validity, such as construct validity, in further research.

  20. Spatial Scaling of Global Rainfall and Flood Extremes

    Science.gov (United States)

    Devineni, Naresh; Lall, Upmanu; Xi, Chen; Ward, Philip

    2014-05-01

    Floods associated with severe storms are a significant source of risk for property, life and supply chains. These property losses tend to be determined as much by the duration and spatial extent of flooding as by the depth and velocity of inundation. High-duration floods are typically induced by persistent rainfall (up to 30-day duration) as seen recently in Thailand, Pakistan, the Ohio and the Mississippi Rivers, France, and Germany. Events related to persistent and recurrent rainfall appear to correspond to the persistence of specific global climate patterns that may be identifiable from global, historical data fields, and also from climate models that project future conditions. In this paper, we investigate the statistical properties of the spatial manifestation of the rainfall exceedances and floods. We present the first-ever results of a global analysis of the scaling characteristics of extreme rainfall and flood event duration, volumes and contiguous flooded areas as a result of large-scale organization of long-duration rainfall events. Results are organized by latitude and with reference to the phases of ENSO, and reveal surprising invariance across latitude. Speculation as to the potential relation to the dynamical factors is presented.

  1. Engineering of an Extreme Rainfall Detection System using Grid Computing

    Directory of Open Access Journals (Sweden)

    Olivier Terzo

    2012-10-01

    Full Text Available This paper describes a new approach for intensive rainfall data analysis. ITHACA's Extreme Rainfall Detection System (ERDS) is conceived to provide near real-time alerts related to potential exceptional rainfall worldwide, which can be used by WFP or other humanitarian assistance organizations to evaluate the event and understand the potentially floodable areas where their assistance is needed. This system is based on precipitation analysis and uses satellite rainfall data with worldwide coverage. The project uses the Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis dataset, a NASA-delivered near real-time product for monitoring current rainfall conditions over the world. Given the large amount of data to process, this paper presents an architectural solution based on Grid Computing techniques. Our focus is on the advantages of using a distributed architecture, in terms of performance, for this specific purpose.
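
    A highly simplified sketch of the core detection step, checking gridded accumulated rainfall against duration-dependent alert thresholds; the thresholds, grid, and data below are hypothetical placeholders, and the TMPA data access and grid-distribution layer are not shown.

```python
# Simplified extreme-rainfall detection sketch: flag grid cells whose
# accumulated rainfall exceeds a duration-dependent alert threshold.
# Thresholds and the synthetic grid are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(7)
# Synthetic 24 h accumulation on a 0.25-degree-like global grid (720 x 1440), in mm
accum_24h = rng.gamma(shape=0.4, scale=20.0, size=(720, 1440))

ALERT_THRESHOLDS_MM = {24: 100.0, 48: 150.0, 72: 200.0}  # hypothetical values

def detect_alerts(accumulation, duration_h):
    """Return a boolean mask of cells exceeding the alert threshold."""
    return accumulation > ALERT_THRESHOLDS_MM[duration_h]

alert_cells = np.argwhere(detect_alerts(accum_24h, 24))  # (row, col) indices of alerted cells
print(f"{len(alert_cells)} cells above the 24 h alert threshold")
```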

  2. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  3. Differential Juvenile Hormone Variations in Scale Insect Extreme Sexual Dimorphism.

    Directory of Open Access Journals (Sweden)

    Isabelle Mifom Vea

    Full Text Available Scale insects have evolved extreme sexual dimorphism, as demonstrated by sedentary juvenile-like females and ephemeral winged males. This dimorphism is established during the post-embryonic development; however, the underlying regulatory mechanisms have not yet been examined. We herein assessed the role of juvenile hormone (JH on the diverging developmental pathways occurring in the male and female Japanese mealybug Planococcus kraunhiae (Kuwana. We provide, for the first time, detailed gene expression profiles related to JH signaling in scale insects. Prior to adult emergence, the transcript levels of JH acid O-methyltransferase, encoding a rate-limiting enzyme in JH biosynthesis, were higher in males than in females, suggesting that JH levels are higher in males. Furthermore, male quiescent pupal-like stages were associated with higher transcript levels of the JH receptor gene, Methoprene-tolerant and its co-activator taiman, as well as the JH early-response genes, Krüppel homolog 1 and broad. The exposure of male juveniles to an ectopic JH mimic prolonged the expression of Krüppel homolog 1 and broad, and delayed adult emergence by producing a supernumeral pupal stage. We propose that male wing development is first induced by up-regulated JH signaling compared to female expression pattern, but a decrease at the end of the prepupal stage is necessary for adult emergence, as evidenced by the JH mimic treatments. Furthermore, wing development seems linked to JH titers as JHM treatments on the pupal stage led to wing deformation. The female pedomorphic appearance was not reflected by the maintenance of high levels of JH. The results in this study suggest that differential variations in JH signaling may be responsible for sex-specific and radically different modes of metamorphosis.

  4. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models.
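
    As an illustration of how a dinucleotide scale turns a sequence into a numeric profile, the sketch below uses placeholder values; the numbers are illustrative, not the published base-stacking energies.

```python
# Map a DNA sequence onto a dinucleotide scale, producing a numeric profile.
# The scale values below are illustrative placeholders, NOT published
# base-stacking energies; a real analysis would substitute measured values.
ILLUSTRATIVE_SCALE = {
    "AA": -1.0, "AC": -1.4, "AG": -1.3, "AT": -0.9,
    "CA": -1.4, "CC": -1.8, "CG": -2.2, "CT": -1.3,
    "GA": -1.3, "GC": -2.3, "GG": -1.8, "GT": -1.4,
    "TA": -0.6, "TC": -1.3, "TG": -1.4, "TT": -1.0,
}

def profile(seq, scale=ILLUSTRATIVE_SCALE):
    """Return the list of scale values for each overlapping dinucleotide."""
    seq = seq.upper()
    return [scale[seq[i:i + 2]] for i in range(len(seq) - 1)]

print(profile("ACGTGGA"))
```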

  5. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models.

  6. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows one to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher-order corrections that are currently out of reach could be evaluated with a VM given enough computing power.

  7. Scaling and clustering effects of extreme precipitation distributions

    Science.gov (United States)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Li, Jianfeng

    2012-08-01

    One of the impacts of climate change and human activities on the hydrological cycle is the change in the precipitation structure. Closely related to the precipitation structure are two characteristics: the volume (m) of wet periods (WPs) and the time interval between WPs, or waiting time (t). Using daily precipitation data for the period 1960-2005 from 590 rain gauge stations in China, these two characteristics are analyzed with respect to scaling and clustering of precipitation episodes. Our findings indicate that m and t follow similar probability distribution curves, implying that precipitation processes are controlled by similar underlying thermodynamics. Analysis of conditional probability distributions shows a significant dependence of m and t on their previous values of similar volumes, and the dependence tends to be stronger when m is larger or t is longer. This indicates that high-intensity precipitation is more likely to be followed by precipitation episodes of similar intensity, and that long waiting times between WPs are more likely to be followed by waiting times of similar duration. This result indicates the clustering of extreme precipitation episodes, and that severe droughts or floods are apt to occur in groups.
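
    To make the two characteristics concrete, the generic sketch below extracts wet-period volumes m and waiting times t between WPs from a daily series; the wet-day threshold and the synthetic data are assumptions.

```python
# Extract wet-period (WP) volumes m and waiting times t between WPs from a
# daily precipitation series. The 0.1 mm wet-day threshold and the synthetic
# series are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
precip = rng.gamma(0.3, 6.0, size=3650) * (rng.random(3650) < 0.35)  # synthetic daily series

WET_THRESHOLD = 0.1  # mm

def wet_periods(series, threshold=WET_THRESHOLD):
    volumes, waits = [], []
    current_volume, current_wait = 0.0, 0
    for value in series:
        if value > threshold:
            if current_wait and volumes:      # a gap between two WPs just ended
                waits.append(current_wait)
            current_wait = 0
            current_volume += value
        else:
            if current_volume > 0:            # a WP just ended
                volumes.append(current_volume)
                current_volume = 0.0
            current_wait += 1
    if current_volume > 0:                    # close a trailing WP
        volumes.append(current_volume)
    return np.array(volumes), np.array(waits)

m, t = wet_periods(precip)
print(f"{len(m)} wet periods, median volume {np.median(m):.1f} mm, median wait {np.median(t):.0f} days")
```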

  8. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    International Nuclear Information System (INIS)

    Daily, Jeffrey A.

    2015-01-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores

  9. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore’s law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores
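
    A toy version of the exact-matching filter idea mentioned above: only sequence pairs sharing at least one exact k-mer are passed on to expensive alignment. This is a generic illustration, not the dissertation's distributed implementation or its work-stealing machinery.

```python
# Toy exact-matching prefilter for homology detection: only sequence pairs
# sharing at least one exact k-mer are kept as candidates for (expensive)
# pairwise alignment. Generic illustration only.
from collections import defaultdict
from itertools import combinations

def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_pairs(sequences, k=8):
    """Return index pairs of sequences that share at least one exact k-mer."""
    index = defaultdict(set)           # k-mer -> set of sequence indices
    for i, seq in enumerate(sequences):
        for km in kmers(seq, k):
            index[km].add(i)
    pairs = set()
    for ids in index.values():
        for pair in combinations(sorted(ids), 2):
            pairs.add(pair)
    return pairs

seqs = ["ACGTACGTGGAT", "TTACGTACGTCC", "GGGGCCCCAAAA"]
print(candidate_pairs(seqs, k=6))      # only the first two share a 6-mer
```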

  10. A Fault Oblivious Extreme-Scale Execution Environment

    Energy Technology Data Exchange (ETDEWEB)

    McKie, Jim

    2014-11-20

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi- and many-core processors. We developed a new operating system, NIX, that supports role-based allocation of cores to processes and was released as open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault-tolerant key-value store and identified scaling issues. A second fault-tolerant task parallel library was developed, based on the Linda tuple space model, that used low-level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations.
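
    For readers unfamiliar with the Linda model mentioned above, here is a minimal in-memory tuple space with the classic out/rd/in operations. It is a single-process sketch only; the project's distributed, fault-tolerant version over low-level interconnect primitives is far beyond this illustration.

```python
# Minimal in-memory tuple space with Linda-style operations:
#   out(t)   - deposit a tuple
#   rd(pat)  - read a matching tuple without removing it
#   in_(pat) - remove and return a matching tuple
# Single-process sketch only; no distribution or fault tolerance.
ANY = object()  # wildcard for pattern fields

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, *tup):
        self._tuples.append(tup)

    def _match(self, pattern):
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                p is ANY or p == v for p, v in zip(pattern, tup)
            ):
                return tup
        return None

    def rd(self, *pattern):
        return self._match(pattern)

    def in_(self, *pattern):
        tup = self._match(pattern)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

ts = TupleSpace()
ts.out("task", 1, "simulate block A")
ts.out("task", 2, "simulate block B")
print(ts.in_("task", ANY, ANY))   # a worker withdraws one task
print(ts.rd("task", ANY, ANY))    # the remaining task is still visible
```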

  11. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  12. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2018-03-27

    Algorithmic and architecture-oriented optimizations are essential for achieving performance worthy of anticipated energy-austere exascale systems. In this paper, we present an extreme scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory, targeting emerging Intel extreme performance HPC architectures. We extract the potential thread- and data-level parallelism of the key Helmholtz kernels of FMM. Our application code is well optimized to exploit the AVX-512 SIMD units of Intel Skylake and Knights Landing architectures. We provide different performance models for tuning the task-based tree traversal implementation of FMM, and develop optimal architecture-specific and algorithm aware partitioning, load balancing, and communication reducing mechanisms to scale up to 6,144 compute nodes of a Cray XC40 with 196,608 hardware cores. With shared memory optimizations, we achieve roughly 77% of peak single precision floating point performance of a 56-core Skylake processor, and on average 60% of peak single precision floating point performance of a 72-core KNL. These numbers represent nearly 5.4x and 10x speedup on Skylake and KNL, respectively, compared to the baseline scalar code. With distributed memory optimizations, on the other hand, we report near-optimal efficiency in the weak scalability study with respect to both the logarithmic communication complexity as well as the theoretical scaling complexity of FMM. In addition, we exhibit up to 85% efficiency in strong scaling. We compute in excess of 2 billion DoF on the full-scale of the Cray XC40 supercomputer.
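
    The pattern of using FMM as the matrix-vector product inside GMRES can be sketched with SciPy's matrix-free interface; the dense placeholder below merely stands in for the FMM-accelerated Helmholtz kernel and is not the paper's solver.

```python
# Matrix-free GMRES sketch: the linear operator only exposes a matvec, which
# in the paper's solver would be an FMM-accelerated Helmholtz kernel; here a
# dense placeholder matrix stands in for it.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(5)
n = 500
dense_stand_in = np.eye(n) + 0.01 * rng.normal(size=(n, n))  # well-conditioned toy system

def fmm_like_matvec(x):
    # Placeholder for the fast (O(N log N) or O(N)) FMM evaluation of the BIE operator.
    return dense_stand_in @ x

A = LinearOperator((n, n), matvec=fmm_like_matvec, dtype=np.float64)
b = rng.normal(size=n)

x, info = gmres(A, b, restart=50, maxiter=1000)
print("converged" if info == 0 else f"GMRES info={info}",
      "residual:", np.linalg.norm(fmm_like_matvec(x) - b))
```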

  13. Data co-processing for extreme scale analysis level II ASC milestone (4745).

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, David; Moreland, Kenneth D.; Oldfield, Ron A.; Fabian, Nathan D.

    2013-03-01

    Exascale supercomputing will embody many revolutionary changes in the hardware and software of high-performance computing. A particularly pressing issue is gaining insight into the science behind the exascale computations. Power and I/O speed constraints will fundamentally change current visualization and analysis workflows. A traditional post-processing workflow involves storing simulation results to disk and later retrieving them for visualization and data analysis. However, at exascale, scientists and analysts will need a range of options for moving data to persistent storage, as the current offline or post-processing pipelines will not be able to capture the data necessary for data analysis of these extreme scale simulations. This Milestone explores two alternate workflows, characterized as in situ and in transit, and compares them. We find each to have its own merits and faults, and we provide information to help pick the best option for a particular use.

  14. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  15. Spatial extreme value analysis to project extremes of large-scale indicators for severe weather.

    Science.gov (United States)

    Gilleland, Eric; Brown, Barbara G; Ammann, Caspar M

    2013-09-01

    Concurrently high values of the maximum potential wind speed of updrafts (Wmax) and 0-6 km wind shear (Shear) have been found to represent conducive environments for severe weather, which subsequently provides a way to study severe weather in future climates. Here, we employ a model for the product of these variables (WmSh) from the National Center for Atmospheric Research/United States National Center for Environmental Prediction reanalysis over North America conditioned on their having extreme energy in the spatial field in order to project the predominant spatial patterns of WmSh. The approach is based on the Heffernan and Tawn conditional extreme value model. Results suggest that this technique estimates the spatial behavior of WmSh well, which allows for exploring possible changes in the patterns over time. While the model enables a method for inferring the uncertainty in the patterns, such analysis is difficult with the currently available inference approach. A variation of the method is also explored to investigate how this type of model might be used to qualitatively understand how the spatial patterns of WmSh correspond to extreme river flow events. A case study for river flows from three rivers in northwestern Tennessee is studied, and it is found that advection of WmSh from the Gulf of Mexico prevails while elsewhere, WmSh is generally very low during such extreme events. © 2013 The Authors. Environmetrics published by John Wiley & Sons, Ltd.

  16. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Biros, George [Univ. of Texas, Austin, TX (United States)

    2018-01-12

    Uncertainty quantification (UQ), that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations, is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a

  17. A Computer-Based Visual Analog Scale,

    Science.gov (United States)

    1992-06-01

    ... keys on the computer keyboard or other input device. The initial position of the arrow is always in the center of the scale to prevent biasing the ...

  18. Adaptation to extreme climate events at a regional scale

    OpenAIRE

    Hoffmann, Christin

    2017-01-01

    A significant increase in the frequency, intensity and duration of extreme climate events in Switzerland creates the need to find a strategy to deal with the damage they cause. For more than two decades, mitigation has been the main objective of climate policy. However, due to already high atmospheric carbon concentrations and the inertia of the climate system, climate change is unavoidable to some degree, even if today's emissions were almost completely cut back. Along with the high...

  19. ExM:System Support for Extreme-Scale, Many-Task Applications

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Daniel S

    2011-05-31

    The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem solving methods and application classes. In this document, we report on combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at University of Chicago, over the first 8 months (through April 30, 2011)

  20. [Upper extremities, neck and back symptoms in office employees working at computer stations].

    Science.gov (United States)

    Zejda, Jan E; Bugajska, Joanna; Kowalska, Małgorzata; Krzych, Lukasz; Mieszkowska, Marzena; Brozek, Grzegorz; Braczkowska, Bogumiła

    2009-01-01

    To obtain current data on the occurrence of work-related symptoms of office computer users in Poland, we implemented a questionnaire survey. Its goal was to assess the prevalence and intensity of symptoms of upper extremities, neck and back in office workers who use computers on a regular basis, and to find out if the occurrence of symptoms depends on the duration of computer use and other work-related factors. Office workers in two towns (Warszawa and Katowice), employed in large social services companies, were invited to fill in the Polish version of the Nordic Questionnaire. The questions included work history and history of last-week symptoms of pain of hand/wrist, elbow, arm, neck and upper and lower back (occurrence and intensity measured by visual scale). Altogether 477 men and women returned the completed questionnaires. Between-group symptom differences (chi-square test) were verified by multivariate analysis (GLM). The prevalence of symptoms in individual body parts was as follows: neck, 55.6%; arm, 26.9%; elbow, 13.3%; wrist/hand, 29.9%; upper back, 49.6%; and lower back, 50.1%. Multivariate analysis confirmed the effect of gender, age and years of computer use on the occurrence of symptoms. Among other determinants, forearm support explained pain of wrist/hand, wrist support of elbow pain, and chair adjustment of arm pain. Association was also found between low back pain and chair adjustment and keyboard position. The findings revealed frequent occurrence of symptoms of pain in upper extremities and neck in office workers who use computers on a regular basis. Seating position could also contribute to the frequent occurrence of back pain in the examined population.

  1. Effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders among computer workers: a randomized controlled trial.

    Science.gov (United States)

    Esmaeilzadeh, Sina; Ozcan, Emel; Capan, Nalan

    2014-01-01

    The aim of the study was to determine effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders (WUEMSDs) among computer workers. Four hundred computer workers answered a questionnaire on work-related upper extremity musculoskeletal symptoms (WUEMSS). Ninety-four subjects with WUEMSS using computers at least 3 h a day participated in a prospective, randomized controlled 6-month intervention. Body posture and workstation layouts were assessed by the Ergonomic Questionnaire. We used the Visual Analogue Scale to assess the intensity of WUEMSS. The Upper Extremity Function Scale was used to evaluate functional limitations at the neck and upper extremities. Health-related quality of life was assessed with the Short Form-36. After baseline assessment, those in the intervention group participated in a multicomponent ergonomic intervention program including a comprehensive ergonomic training consisting of two interactive sessions, an ergonomic training brochure, and workplace visits with workstation adjustments. Follow-up assessment was conducted after 6 months. In the intervention group, body posture (p 0.05). Ergonomic intervention programs may be effective in reducing ergonomic risk factors among computer workers and consequently in the secondary prevention of WUEMSDs.

  2. Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Knight, Samuel [O8953; Baker, Gavin Matthew; Gamell, Marc [Rutgers U; Hollman, David [08953; Sjaardema, Gregor [SNL; Kolla, Hemanth [SNL; Teranishi, Keita; Wilke, Jeremiah J; Slattengren, Nicole [SNL; Bennett, Janine Camille

    2015-10-01

    Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of these, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor developed sufficiently for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion and UINTAH) to examine the feasibility of inserting new programming model elements from these models into an existing code base.

  3. Scalable ParaView for Extreme Scale Visualization, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Petscale computing is leading to significant breakthroughs in a number of fields and is revolutionizing the way science is conducted. Data is not knowledge, however,...

  4. Enabling Structured Exploration of Workflow Performance Variability in Extreme-Scale Environments

    Energy Technology Data Exchange (ETDEWEB)

    Kleese van Dam, Kerstin; Stephan, Eric G.; Raju, Bibi; Altintas, Ilkay; Elsethagen, Todd O.; Krishnamoorthy, Sriram

    2015-11-15

    Workflows are taking an increasingly important role in orchestrating complex scientific processes in extreme scale and highly heterogeneous environments. However, to date we cannot reliably predict, understand, and optimize workflow performance. Sources of performance variability and in particular the interdependencies of workflow design, execution environment and system architecture are not well understood. While there is a rich portfolio of tools for performance analysis, modeling and prediction for single applications in homogeneous computing environments, these are not applicable to workflows, due to the number and heterogeneity of the involved workflow and system components and their strong interdependencies. In this paper, we investigate workflow performance goals and identify factors that could have a relevant impact. Based on our analysis, we propose a new workflow performance provenance ontology, the Open Provenance Model-based WorkFlow Performance Provenance, or OPM-WFPP, that will enable the empirical study of workflow performance characteristics and variability including complex source attribution.

  5. Computer simulations for the nano-scale

    International Nuclear Information System (INIS)

    Stich, I.

    2007-01-01

    A review of methods for computations for the nano-scale is presented. The paper should provide a convenient starting point into computations for the nano-scale as well as a more in-depth presentation for those already working in the field of atomic/molecular-scale modeling. The argument is divided into chapters covering the methods for description of the (i) electrons, (ii) ions, and (iii) techniques for efficient solving of the underlying equations. A fairly broad view is taken covering the Hartree-Fock approximation, density functional techniques and quantum Monte-Carlo techniques for electrons. The customary quantum chemistry methods, such as post-Hartree-Fock techniques, are only briefly mentioned. Description of both classical and quantum ions is presented. The techniques cover Ehrenfest, Born-Oppenheimer, and Car-Parrinello dynamics. The strong and weak points of both principal and technical nature are analyzed. In the second part we introduce a number of applications to demonstrate the different approximations and techniques introduced in the first part. They cover a wide range of applications such as non-simple liquids, surfaces, molecule-surface interactions, applications in nano technology, etc. These more in-depth presentations, while certainly not exhaustive, should provide information on technical aspects of the simulations, typical parameters used, and ways of analysis of the huge amounts of data generated in these large-scale supercomputer simulations. (author)

  6. Identification of large-scale meteorological patterns associated with extreme precipitation in the US northeast

    Science.gov (United States)

    Agel, Laurie; Barlow, Mathew; Feldstein, Steven B.; Gutowski, William J.

    2018-03-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. The tropopause height provides a compact representation of the upper-tropospheric potential vorticity, which is closely related to the overall evolution and intensity of weather systems. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied on extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into the overall context of patterns for all days. Six tropopause patterns are identified through KMC for extreme precipitation days: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns for all days are identified. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns shows various degrees of anomalously strong moisture transport preceding, and upward motion during, extreme precipitation events.
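    As a rough illustration of the KMC step described above, the sketch below clusters synthetic daily tropopause-height anomaly maps into six patterns with scikit-learn's k-means; the station observations, reanalysis fields, and the SOM step are not reproduced, and all sizes are arbitrary.

```python
# Hedged sketch: k-means on flattened daily anomaly maps, one row per extreme-precipitation day.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_days, nlat, nlon = 300, 20, 30                        # toy number of extreme days and grid size
fields = rng.normal(size=(n_days, nlat, nlon))          # synthetic tropopause-height anomaly maps

X = fields.reshape(n_days, -1)                          # flatten each map into a feature vector
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

patterns = km.cluster_centers_.reshape(6, nlat, nlon)   # six composite circulation patterns
counts = np.bincount(km.labels_, minlength=6)
print("days assigned to each pattern:", counts)
```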

  7. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
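    The sketch below illustrates the tile low-rank idea only: compressing a data-sparse off-diagonal tile to low rank with a truncated SVD under an accuracy threshold. It is not HiCMA code and says nothing about the StarPU task scheduling; the kernel used to build the toy tile is an arbitrary choice.

```python
# Minimal sketch of tile low-rank compression of a data-sparse off-diagonal tile.
import numpy as np

def compress_tile(tile, tol=1e-6):
    """Return factors U, V with tile ≈ U @ V, discarding singular values below tol * s_max."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))      # numerical rank at the requested accuracy
    return U[:, :k] * s[:k], Vt[:k, :]           # (m, k) and (k, n) factors

# toy data-sparse tile: a smooth kernel evaluated on two well-separated point sets
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)
tile = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

U, V = compress_tile(tile, tol=1e-8)
rel_err = np.linalg.norm(tile - U @ V) / np.linalg.norm(tile)
print("stored rank:", U.shape[1], "relative error:", rel_err)
```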

  8. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  9. Computing the distribution of return levels of extreme warm temperatures for future climate projections

    Energy Technology Data Exchange (ETDEWEB)

    Pausader, M.; Parey, S.; Nogaj, M. [EDF/R and D, Chatou Cedex (France); Bernie, D. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-03-15

    In order to take into account uncertainties in the future climate projections there is a growing demand for probabilistic projections of climate change. This paper presents a methodology for producing such a probabilistic analysis of future temperature extremes. The 20- and 100-year return levels are obtained from those of the normalized variable and the changes in mean and standard deviation given by climate models for the desired future periods. Uncertainty in the future change of these extremes is quantified using a multi-model ensemble and a perturbed-physics ensemble. The probability density functions of future return levels are computed at a representative location from the joint probability distribution of mean and standard deviation changes given by the two combined ensembles of models. For the studied location, the 100-year return level at the end of the century is lower than 41 °C with 80% confidence. Then, as the number of model simulations is too low to compute a reliable distribution, two techniques proposed in the literature (local pattern scaling and ANOVA) have been used to infer the changes in mean and standard deviation for the combinations of RCM and GCM which have not been run. The ANOVA technique leads to better results for the reconstruction of the mean changes, whereas the two methods fail to correctly infer the changes in standard deviation. As standard deviation change has a major impact on return level change, there is a need to improve the models and the different techniques regarding the variance changes. (orig.)
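    A minimal sketch of the return-level construction described above, assuming a GEV fit to yearly maxima of the normalized variable and externally supplied changes in mean and standard deviation (the synthetic numbers below stand in for the climate-model ensembles):

```python
# Hedged sketch: return level of the variable = mean + std * return level of the normalized variable.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
summer_tmax = rng.normal(loc=30.0, scale=3.0, size=(50, 92)).max(axis=1)   # 50 synthetic yearly maxima

mu, sigma = 30.0, 3.0                        # present-day mean and standard deviation
z = (summer_tmax - mu) / sigma               # maxima of the normalized variable
shape, loc, scale = genextreme.fit(z)

z100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)      # 100-year normalized level

delta_mu, delta_sigma = 3.5, 0.4             # assumed future changes from one model combination
rl_present = mu + sigma * z100
rl_future = (mu + delta_mu) + (sigma + delta_sigma) * z100
print(f"100-year return level: {rl_present:.1f} °C now, {rl_future:.1f} °C in the future")
```

    Repeating the last two lines over an ensemble of (delta_mu, delta_sigma) pairs yields the kind of return-level distribution described in the abstract.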

  10. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when

  11. Web-based Visual Analytics for Extreme Scale Climate Science

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Evans, Katherine J [ORNL; Harney, John F [ORNL; Jewell, Brian C [ORNL; Shipman, Galen M [ORNL; Smith, Brian E [ORNL; Thornton, Peter E [ORNL; Williams, Dean N. [Lawrence Livermore National Laboratory (LLNL)

    2014-01-01

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  12. Understanding convective extreme precipitation scaling using observations and an entraining plume model

    NARCIS (Netherlands)

    Loriaux, J.M.; Lenderink, G.; De Roode, S.R.; Siebesma, A.P.

    2013-01-01

    Previously observed twice-Clausius–Clapeyron (2CC) scaling for extreme precipitation at hourly time scales has led to discussions about its origin. The robustness of this scaling is assessed by analyzing a subhourly dataset of 10-min resolution over the Netherlands. The results confirm the validity

  13. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    Science.gov (United States)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global-to-regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.

  14. Domain Decomposition for Computing Extremely Low Frequency Induced Current in the Human Body

    OpenAIRE

    Perrussel , Ronan; Voyer , Damien; Nicolas , Laurent; Scorretti , Riccardo; Burais , Noël

    2011-01-01

    Computation of electromagnetic fields in high-resolution computational phantoms requires solving large linear systems. We present an application of Schwarz preconditioners with Krylov subspace methods for computing extremely low frequency induced fields in a phantom derived from the Visible Human.
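    As a hedged illustration of the solver strategy (not of the actual finite-element phantom models), the sketch below accelerates a Krylov iteration with a one-level additive Schwarz preconditioner built from non-overlapping blocks, i.e. block Jacobi, applied here to a small 1-D Poisson matrix:

```python
# Hedged sketch: Krylov solve (GMRES) preconditioned by local subdomain solves.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nblocks = 400, 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")   # toy 1-D Laplacian
b = np.ones(n)

# factorize each diagonal block once; applying the preconditioner solves the local problems
size = n // nblocks
lu_blocks = [spla.splu(A[i*size:(i+1)*size, i*size:(i+1)*size].tocsc()) for i in range(nblocks)]

def apply_schwarz(r):
    z = np.empty_like(r)
    for i, lu in enumerate(lu_blocks):
        z[i*size:(i+1)*size] = lu.solve(r[i*size:(i+1)*size])
    return z

M = spla.LinearOperator((n, n), matvec=apply_schwarz)
x, info = spla.gmres(A, b, M=M)
print("converged:", info == 0, "residual norm:", np.linalg.norm(b - A @ x))
```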

  15. Extreme daily precipitation in Western Europe with climate change at appropriate spatial scales

    NARCIS (Netherlands)

    Booij, Martijn J.

    2002-01-01

    Extreme daily precipitation for the current and changed climate at appropriate spatial scales is assessed. This is done in the context of the impact of climate change on flooding in the river Meuse in Western Europe. The objective is achieved by determining and comparing extreme precipitation from

  16. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Roger [Univ. of Southern California, Los Angeles, CA (United States)

    2017-04-18

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST was to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.
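    As one illustrative ingredient of such a UQ pipeline, the sketch below builds a (non-adaptive) polynomial chaos surrogate of a one-dimensional model with a Gaussian input using Gauss-Hermite quadrature; the forward model, truncation order, and quadrature size are arbitrary stand-ins, not QUEST software.

```python
# Hedged sketch: polynomial chaos coefficients for X ~ N(0, 1) via Gauss-Hermite_e quadrature.
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def forward_model(x):                        # placeholder for an expensive simulation
    return np.exp(0.3 * x) + 0.1 * x ** 2

order, nquad = 6, 20
nodes, weights = He.hermegauss(nquad)        # quadrature rule for weight exp(-x^2 / 2)
weights = weights / np.sqrt(2.0 * np.pi)     # normalize to the standard normal density

# PC coefficients c_k = E[f(X) He_k(X)] / k! for probabilists' Hermite polynomials He_k
coeffs = []
for k in range(order + 1):
    Hk = He.hermeval(nodes, [0.0] * k + [1.0])
    coeffs.append(np.sum(weights * forward_model(nodes) * Hk) / math.factorial(k))

mean_pc = coeffs[0]
var_pc = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print("surrogate mean and variance:", mean_pc, var_pc)
```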

  17. Computer Aided Design Tools for Extreme Environment Electronics, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project aims to provide Computer Aided Design (CAD) tools for radiation-tolerant, wide-temperature-range digital, analog, mixed-signal, and radio-frequency...

  18. Computer tomography for rare soft tissue tumours of the extremities

    International Nuclear Information System (INIS)

    Boettger, E.; Semerak, M.; Stoltze, D.; Rossak, K.

    1979-01-01

    Five patients with undiagnosed soft tissue masses in the extremities were examined and in two a pathological diagnosis could be made. One was an extensive, invasive fibroma (desmoid) 22 cm long which could be followed from the thigh almost into the pelvis. It was sharply demarcated from the surrounding muscles and of higher density. The second case was a 12 cm long cavernous haemangioma in the semi-membranosus muscle. This was originally hypo-dense, but showed marked increase in its density after the administration of contrast. (orig.)

  19. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today’s distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
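    A toy illustration of the worklet concept (not the project's actual framework, which targets many-core devices): a stateless per-element operation mapped over a data array by a generic scheduler, here Python's multiprocessing pool.

```python
# Hedged sketch: a stateless "worklet" applied independently to every element of a field.
from multiprocessing import Pool

def magnitude_worklet(vec):
    """Stateless worklet: operates on exactly one element, no shared state, no side effects."""
    x, y, z = vec
    return (x * x + y * y + z * z) ** 0.5

if __name__ == "__main__":
    field = [(float(i), 2.0 * i, 0.5 * i) for i in range(10_000)]   # toy 3-component vector field
    with Pool(processes=4) as pool:                                 # stand-in for a device scheduler
        magnitudes = pool.map(magnitude_worklet, field, chunksize=1_000)
    print(magnitudes[:3])
```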

  20. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performances of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performances. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm proposed is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrated the superior generalization performance and efficiency of the FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
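    The sketch below illustrates the core FSVD-H-ELM idea under simplifying assumptions: hidden-node weights are taken from the SVD of random data subsets rather than drawn at random, and output weights are solved by ridge regression. Sizes, the subset scheme, and the toy data are placeholders, not the paper's exact algorithm.

```python
# Hedged sketch of an ELM whose hidden weights come from SVDs of random data subsets.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 100))                        # n samples, d features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)       # toy regression target

# SVD hidden nodes: top right singular vectors of several random subsets of the data
subsets = [X[rng.choice(len(X), 200, replace=False)] for _ in range(5)]
W = np.vstack([np.linalg.svd(S, full_matrices=False)[2][:20] for S in subsets])   # 100 hidden nodes, each a d-vector
b = rng.normal(size=W.shape[0])                         # random biases, as in a classical ELM

H = np.tanh(X @ W.T + b)                                # hidden-layer activations, shape (n, 100)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)   # ridge-regularized output weights
print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```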

  1. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest that very high fault rates will be prevalent in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  2. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result, the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that

  3. On the nonlinearity of spatial scales in extreme weather attribution statements

    Science.gov (United States)

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; Wehner, Michael; Shiogama, Hideo; Wolski, Piotr; Ciavarella, Andrew; Christidis, Nikolaos

    2018-04-01

    In the context of ongoing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used, duration, spatial extent, and geographic location of the event; some of these factors are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.

  4. Brief Assessment of Motor Function: Content Validity and Reliability of the Upper Extremity Gross Motor Scale

    Science.gov (United States)

    Cintas, Holly Lea; Parks, Rebecca; Don, Sarah; Gerber, Lynn

    2011-01-01

    Content validity and reliability of the Brief Assessment of Motor Function (BAMF) Upper Extremity Gross Motor Scale (UEGMS) were evaluated in this prospective, descriptive study. The UEGMS is one of five BAMF ordinal scales designed for quick documentation of gross, fine, and oral motor skill levels. Designed to be independent of age and…

  5. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages

  6. Extreme-Scale Alignments Of Quasar Optical Polarizations And Galactic Dust Contamination

    Science.gov (United States)

    Pelgrims, Vincent

    2017-10-01

    Almost twenty years ago the optical polarization vectors from quasars were shown to be aligned over extreme scales. That evidence was later confirmed and enhanced thanks to additional optical data obtained with the ESO instrument FORS2 mounted on the VLT, in Chile. These observations suggest either Galactic foreground contamination of the data or, more interestingly, a cosmological origin. Using 353-GHz polarization data from the Planck satellite, I recently showed that the main features of the extreme-scale alignments of the quasar optical polarization vectors are unaffected by the Galactic thermal dust. This confirms previous studies based on optical starlight polarization and discards the scenario of Galactic contamination. In this talk, I shall briefly review the extreme-scale quasar polarization alignments, discuss the main results submitted to A&A and motivate forthcoming projects at the frontier between Galactic and extragalactic astrophysics.

  7. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    Science.gov (United States)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  8. Scaling of precipitation extremes with temperature in the French Mediterranean region: What explains the hook shape?

    Science.gov (United States)

    Drobinski, P.; Alonzo, B.; Bastin, S.; Silva, N. Da; Muller, C.

    2016-04-01

    Expected changes to future extreme precipitation remain a key uncertainty associated with anthropogenic climate change. Extreme precipitation has been proposed to scale with the precipitable water content in the atmosphere. Assuming constant relative humidity, this implies an increase of precipitation extremes at a rate of about 7% per °C globally, as indicated by the Clausius-Clapeyron relationship. Increases faster and slower than Clausius-Clapeyron have also been reported. In this work, we examine the scaling between precipitation extremes and temperature in the present climate using simulations and measurements from surface weather stations collected in the frame of the HyMeX and MED-CORDEX programs in Southern France. Of particular interest are departures from the Clausius-Clapeyron thermodynamic expectation, their spatial and temporal distribution, and their origin. Looking at the scaling of precipitation extremes with temperature, two regimes emerge which form a hook shape: one at low temperatures (cooler than around 15°C) with rates of increase close to the Clausius-Clapeyron rate and one at high temperatures (warmer than about 15°C) with sub-Clausius-Clapeyron rates and most often negative rates. On average, the region of focus does not seem to exhibit super-Clausius-Clapeyron behavior except at some stations, in contrast to earlier studies. Many factors can contribute to departure from Clausius-Clapeyron scaling: time and spatial averaging, choice of scaling temperature (surface versus condensation level), and precipitation efficiency and vertical velocity in updrafts that are not necessarily constant with temperature. But most importantly, the dynamical contribution of orography to precipitation in the fall over this area during the so-called "Cevenoles" events explains the hook shape of the scaling of precipitation extremes.
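    A hedged sketch of the binning diagnostic behind such scaling studies: take a high percentile of precipitation in temperature bins and compare its growth rate with the roughly 7% per °C Clausius-Clapeyron reference. The data below are synthetic, not the HyMeX/MED-CORDEX observations, so no hook shape is expected here.

```python
# Hedged sketch: 99th-percentile precipitation per temperature bin vs. the ~7 %/°C CC rate.
import numpy as np

rng = np.random.default_rng(2)
temp = rng.uniform(0.0, 30.0, size=50_000)                        # event temperatures (°C)
precip = rng.gamma(2.0, 1.0, size=temp.size) * 1.07 ** temp       # toy CC-like intensities

bins = np.arange(0.0, 32.0, 2.0)
idx = np.digitize(temp, bins)
t_mid, p99 = [], []
for k in range(1, len(bins)):
    sel = idx == k
    if sel.sum() > 100:                                           # skip poorly sampled bins
        t_mid.append(0.5 * (bins[k - 1] + bins[k]))
        p99.append(np.percentile(precip[sel], 99))

slope = np.polyfit(t_mid, np.log(p99), 1)[0]                      # exponential growth rate per °C
print(f"fitted rate: {100 * (np.exp(slope) - 1):.1f} %/°C (Clausius-Clapeyron ~7 %/°C)")
```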

  9. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)

    2017-02-01

    efficient computation on an exascale computer. This project concludes with a functional prototype containing pervasively parallel algorithms that perform demonstratively well on many-core processors. These algorithms are fundamental for performing data analysis and visualization at extreme scale.

  10. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the Column Quasi-Geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  11. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    Science.gov (United States)

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitudes. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students’ overall performance patterns between the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large-scale galaxy structure.

  12. dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science

    Energy Technology Data Exchange (ETDEWEB)

    Livny, Miron [Univ. of Wisconsin, Madison, WI (United States)

    2018-01-22

    This report introduces publications that report the results of a project that aimed to design a computational framework that enables computational experimentation at scale while supporting the model of “submit locally, compute globally”. The project focuses on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during execution.

  13. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across
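    As a concrete, hedged example of one entry in such a catalog, the sketch below implements a bare-bones checkpoint-and-rollback-recovery pattern: periodically persist application state and, after a failure, resume from the last checkpoint. The file name, checkpoint interval, and injected fault are illustrative choices only.

```python
# Hedged sketch of a checkpoint/rollback-recovery resilience pattern.
import os, pickle, random

CKPT = "state.ckpt"                              # hypothetical checkpoint file name

def load_or_init():
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as fh:
            return pickle.load(fh)               # roll back to the last consistent state
    return {"step": 0, "value": 0.0}

def checkpoint(state):
    with open(CKPT + ".tmp", "wb") as fh:        # write-then-rename keeps the checkpoint atomic
        pickle.dump(state, fh)
    os.replace(CKPT + ".tmp", CKPT)

state = load_or_init()
while state["step"] < 100:
    if random.random() < 0.01:                   # injected fault to exercise the recovery path
        raise RuntimeError("injected fault: rerun the script to resume from the checkpoint")
    state["value"] += 1.0 / (state["step"] + 1)  # one unit of "work"
    state["step"] += 1
    if state["step"] % 10 == 0:
        checkpoint(state)
print("done:", state["value"])
```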

  14. Regional-Scale High-Latitude Extreme Geoelectric Fields Pertaining to Geomagnetically Induced Currents

    Science.gov (United States)

    Pulkkinen, Antti; Bernabeu, Emanuel; Eichner, Jan; Viljanen, Ari; Ngwira, Chigomezyo

    2015-01-01

    Motivated by the needs of the high-voltage power transmission industry, we use data from the high-latitude IMAGE magnetometer array to study characteristics of extreme geoelectric fields at regional scales. We use 10-s resolution data for years 1993-2013, and the fields are characterized using average horizontal geoelectric field amplitudes taken over station groups that span about 500-km distance. We show that geoelectric field structures associated with localized extremes at single stations can be greatly different from structures associated with regionally uniform geoelectric fields, which are well represented by spatial averages over single stations. Visual extrapolation and rigorous extreme value analysis of spatially averaged fields indicate that the expected range for 1-in-100-year extreme events are 3-8 V/km and 3.4-7.1 V/km, respectively. The Quebec reference ground model is used in the calculations.

  15. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  16. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
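    The sketch below shows only the generic discrete-event core that such simulators build on, namely a priority queue of timestamped events driving virtual time forward; SST's macroscale components additionally run real code on user-space threads, which is not reproduced here, and the event names are invented.

```python
# Hedged sketch of a discrete-event loop: pop the earliest event, act, schedule follow-up events.
import heapq

events = []        # heap of (virtual_time, sequence_number, description)
seq = 0

def schedule(t, what):
    global seq
    heapq.heappush(events, (t, seq, what))
    seq += 1

schedule(0.0, "rank0 compute")
schedule(0.0, "rank1 compute")

now = 0.0
while events and now < 50.0:
    now, _, what = heapq.heappop(events)
    print(f"t={now:6.2f}  {what}")
    if "compute" in what:
        rank = what.split()[0]
        schedule(now + 10.0, f"{rank} compute")        # next compute phase
        schedule(now + 2.5, f"message from {rank}")    # network event after a modeled latency
```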

  17. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    Directory of Open Access Journals (Sweden)

    Veiersted Kaj Bo

    2010-04-01

    Background: This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods: A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results: A total of 22 studies (26 articles) fulfilled the inclusion criteria. The results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time, and a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions: There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered moderate or strong, and there is a need for more and better documentation.

  18. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    Science.gov (United States)

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
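    A minimal sketch of how such parameters are used downstream, assuming a sample of null (shuffled-sequence) gapped-alignment scores is available: fit a Gumbel distribution and convert an observed score into a tail probability. The score sample below is synthetic, and the paper's mixture-distribution/BLOSUM construction of the parameters is not reproduced.

```python
# Hedged sketch: Gumbel fit to null alignment scores and a P-value for an observed score.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(3)
null_scores = gumbel_r.rvs(loc=35.0, scale=6.0, size=5000, random_state=rng)   # synthetic shuffled-pair scores

loc, scale = gumbel_r.fit(null_scores)                 # estimated Gumbel parameters (mu, beta)
observed = 62.0
p_value = gumbel_r.sf(observed, loc=loc, scale=scale)  # probability of a score this high by chance
print(f"mu={loc:.2f}, beta={scale:.2f}, P(score >= {observed}) = {p_value:.2e}")
```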

  19. Changes and Attribution of Extreme Precipitation in Climate Models: Subdaily and Daily Scales

    Science.gov (United States)

    Zhang, W.; Villarini, G.; Scoccimarro, E.; Vecchi, G. A.

    2017-12-01

    Extreme precipitation events are responsible for numerous hazards, including flooding, soil erosion, and landslides. Because of their significant socio-economic impacts, the attribution and projection of these events are of crucial importance to improve our response, mitigation and adaptation strategies. Here we present results from our ongoing work. In terms of attribution, we use idealized experiments [pre-industrial control experiment (PI) and 1% per year increase (1%CO2) in atmospheric CO2] from ten general circulation models produced under the Coupled Model Intercomparison Project Phase 5 (CMIP5) and the fraction of attributable risk to examine the CO2 effects on extreme precipitation at the sub-daily and daily scales. We find that the increased CO2 concentration substantially increases the odds of the occurrence of sub-daily precipitation extremes compared to the daily scale in most areas of the world, with the exception of some regions in the sub-tropics, likely in relation to the subsidence of the Hadley Cell. These results point to the large role that atmospheric CO2 plays in extreme precipitation under an idealized framework. Furthermore, we investigate the changes in extreme precipitation events with the Community Earth System Model (CESM) climate experiments using the scenarios consistent with the 1.5°C and 2°C temperature targets. We find that the frequency of annual extreme precipitation at a global scale increases in both 1.5°C and 2°C scenarios until around 2070, after which the magnitudes of the trend become much weaker or even negative. Overall, the frequency of global annual extreme precipitation is similar between 1.5°C and 2°C for the period 2006-2035, and the changes in extreme precipitation in individual seasons are consistent with those for the entire year. The frequency of extreme precipitation in the 2°C experiments is higher than for the 1.5°C experiment after the late 2030s, particularly for the period 2071-2100.
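    A short sketch of the fraction of attributable risk (FAR) computation referred to above, FAR = 1 - p0/p1, where p0 and p1 are exceedance probabilities of the same threshold in the PI and 1%CO2 experiments; the two samples below are synthetic stand-ins for model output.

```python
# Hedged sketch of the fraction of attributable risk for a fixed extreme-precipitation threshold.
import numpy as np

rng = np.random.default_rng(4)
pi_precip = rng.gamma(2.0, 5.0, size=20_000)       # daily precipitation, pre-industrial control (toy)
co2_precip = rng.gamma(2.0, 5.6, size=20_000)      # daily precipitation, 1%CO2 experiment (toy)

threshold = np.percentile(pi_precip, 99.9)         # "extreme" defined from the control run
p0 = np.mean(pi_precip > threshold)                # exceedance probability without extra CO2
p1 = np.mean(co2_precip > threshold)               # exceedance probability with increased CO2
far = 1.0 - p0 / p1
print(f"p0={p0:.4f}, p1={p1:.4f}, FAR={far:.2f}, risk ratio={p1 / p0:.2f}")
```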

  20. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    Science.gov (United States)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  1. Assessing Regional Scale Variability in Extreme Value Statistics Under Altered Climate Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Brunsell, Nathaniel [Univ. of Kansas, Lawrence, KS (United States); Mechem, David [Univ. of Kansas, Lawrence, KS (United States); Ma, Chunsheng [Wichita State Univ., KS (United States)

    2015-02-20

    Recent studies have suggested that low-frequency modes of climate variability can significantly influence regional climate. The climatology associated with extreme events has been shown to be particularly sensitive. This has profound implications for droughts, heat waves, and food production. We propose to examine regional climate simulations conducted over the continental United States by applying a recently developed technique which combines wavelet multi-resolution analysis with information theory metrics. This research is motivated by two fundamental questions concerning the spatial and temporal structure of extreme events. These questions are 1) what temporal scales of the extreme value distributions are most sensitive to alteration by low-frequency climate forcings and 2) what is the nature of the spatial structure of variation in these timescales? The primary objective is to assess to what extent information theory metrics can be useful in characterizing the nature of extreme weather phenomena. Specifically, we hypothesize that (1) changes in the nature of extreme events will impact the temporal probability density functions and that information theory metrics will be sensitive to these changes and (2) via a wavelet multi-resolution analysis, we will be able to characterize the relative contribution of different timescales on the stochastic nature of extreme events. In order to address these hypotheses, we propose a unique combination of an established regional climate modeling approach and advanced statistical techniques to assess the effects of low-frequency modes on climate extremes over North America. The behavior of climate extremes in RCM simulations for the 20th century will be compared with statistics calculated from the United States Historical Climatology Network (USHCN) and simulations from the North American Regional Climate Change Assessment Program (NARCCAP). This effort will serve to establish the baseline behavior of climate extremes, the

  2. More scalability, less pain: A simple programming model and its implementation for extreme computing

    International Nuclear Information System (INIS)

    Lusk, E.L.; Pieper, S.C.; Butler, R.M.

    2010-01-01

    This is the story of a simple programming model, its implementation for extreme computing, and a breakthrough in nuclear physics. A critical issue for the future of high-performance computing is the programming model to use on next-generation architectures. Described here is a promising approach: program very large machines by combining a simplified programming model with a scalable library implementation. The presentation takes the form of a case study in nuclear physics. The chosen application addresses fundamental issues in the origins of our Universe, while the library developed to enable this application on the largest computers may have applications beyond this one.

  3. Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.

    2017-12-01

    SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: Considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved mega-thrust and a system of splay faults, as well as the seismic wave field and seafloor displacement with frequency content up to 2.2 Hz. We validate the scenario by geodetic, seismological and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.

  4. Scaling ion traps for quantum computing

    CSIR Research Space (South Africa)

    Uys, H

    2010-09-01

    The design, fabrication and preliminary testing of a chip-scale, multi-zone, surface-electrode ion trap are reported. The modular design and fabrication techniques used are anticipated to advance scalability of ion trap quantum computing architectures...

  5. Combinations of large-scale circulation anomalies conducive to precipitation extremes in the Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek; Müller, Miloslav

    2014-01-01

    Vol. 138, March 2014 (2014), pp. 205-212. ISSN 0169-8095. R&D Projects: GA ČR(CZ) GAP209/11/1990. Institutional support: RVO:68378289. Keywords: precipitation extreme * synoptic-scale cause * re-analysis * circulation anomaly. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 2.844, year: 2014. http://www.sciencedirect.com/science/article/pii/S0169809513003372

  6. Extreme-scale alignments of quasar optical polarizations and Galactic dust contamination

    OpenAIRE

    Pelgrims, Vincent

    2017-01-01

    Almost twenty years ago the optical polarization vectors from quasars were shown to be aligned over extreme-scales. That evidence was later confirmed and enhanced thanks to additional optical data obtained with the ESO instrument FORS2 mounted on the VLT, in Chile. These observations suggest either Galactic foreground contamination of the data or, more interestingly, a cosmological origin. Using 353-GHz polarization data from the Planck satellite, I recently showed that the main features of t...

  7. Moths produce extremely quiet ultrasonic courtship songs by rubbing specialized scales

    DEFF Research Database (Denmark)

    Nakano, Ryo; Skals, Niels; Takanashi, Takuma

    2008-01-01

    level at 1 cm) adapted for private sexual communication in the Asian corn borer moth, Ostrinia furnacalis. During courtship, the male rubs specialized scales on the wing against those on the thorax to produce the songs, with the wing membrane underlying the scales possibly acting as a sound resonator....... The male's song suppresses the escape behavior of the female, thereby increasing his mating success. Our discovery of extremely low-intensity ultrasonic communication may point to a whole undiscovered world of private communication, using "quiet" ultrasound....

  8. Synchronization and Causality Across Time-scales: Complex Dynamics and Extremes in El Niño/Southern Oscillation

    Science.gov (United States)

    Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.

    2017-12-01

    A better understanding of dynamics in complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. A large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for better understanding of underlying mechanisms and a key for modeling and prediction of such systems. This study introduces and applies information theory diagnostics to phase and amplitude time series of different wavelet components of the observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with the extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to an improved prediction. Moreover, the statistical framework which we apply here is suitable for directly inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.
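
    As a rough illustration of the kind of cross-scale diagnostic described above, the sketch below extracts the phase of a slow wavelet component and the amplitude of a faster one from a toy index and estimates a naive binned mutual information between them. PyWavelets (pywt) is assumed to be available, the synthetic series merely stands in for an ENSO index, and this is not the authors' full conditional-mutual-information and surrogate-testing framework.

        import numpy as np
        import pywt  # PyWavelets, assumed to be available

        def wavelet_phase_amp(x, period, dt=1.0, wavelet="cmor1.5-1.0"):
            """Phase and amplitude of the complex Morlet component of x at the given period."""
            scale = pywt.scale2frequency(wavelet, 1.0) * period / dt
            coef, _ = pywt.cwt(x, [scale], wavelet, sampling_period=dt)
            return np.angle(coef[0]), np.abs(coef[0])

        def mutual_information(a, b, bins=16):
            """Naive histogram estimate of mutual information (in nats)."""
            pab, _, _ = np.histogram2d(a, b, bins=bins)
            pab = pab / pab.sum()
            pa = pab.sum(axis=1, keepdims=True)
            pb = pab.sum(axis=0, keepdims=True)
            nz = pab > 0
            return float(np.sum(pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])))

        # toy monthly index standing in for an ENSO-like series
        rng = np.random.default_rng(0)
        t = np.arange(1200)
        x = np.sin(2 * np.pi * t / 48) + 0.5 * np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=t.size)

        phase_slow, _ = wavelet_phase_amp(x, period=48)   # quasi-quadrennial component
        _, amp_fast = wavelet_phase_amp(x, period=12)     # annual component
        print("MI(slow phase, fast amplitude) = %.3f nats" % mutual_information(phase_slow, amp_fast))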

  9. Influence of climate variability versus change at multi-decadal time scales on hydrological extremes

    Science.gov (United States)

    Willems, Patrick

    2014-05-01

    Recent studies have shown that rainfall and hydrological extremes do not occur randomly in time, but are subject to multidecadal oscillations. In addition to these oscillations, there are temporal trends due to climate change. Design statistics, such as intensity-duration-frequency (IDF) for extreme rainfall or flow-duration-frequency (QDF) relationships, are affected by both types of temporal changes (short term and long term). This presentation discusses these changes, how they influence water engineering design and decision making, and how this influence can be assessed and taken into account in practice. The multidecadal oscillations in rainfall and hydrological extremes were studied based on a technique for the identification and analysis of changes in extreme quantiles. The statistical significance of the oscillations was evaluated by means of a non-parametric bootstrapping method. Oscillations in large scale atmospheric circulation were identified as the main drivers for the temporal oscillations in rainfall and hydrological extremes. They also explain why spatial phase shifts (e.g. north-south variations in Europe) exist between the oscillation highs and lows. Next to the multidecadal climate oscillations, several stations show trends during the most recent decades, which may be attributed to climate change as a result of anthropogenic global warming. Such attribution to anthropogenic global warming is, however, uncertain. It can be done based on simulation results with climate models, but it is shown that the climate model results are too uncertain to enable a clear attribution. Water engineering design statistics, such as extreme rainfall IDF or peak or low flow QDF statistics, obviously are influenced by these temporal variations (oscillations, trends). It is shown in the paper, based on the Brussels 10-minute rainfall data, that rainfall design values may be about 20% biased or different when based on short rainfall series of 10 to 15 years length, and

  10. Analysis of the Extremely Low Frequency Magnetic Field Emission from Laptop Computers

    Directory of Open Access Journals (Sweden)

    Brodić Darko

    2016-03-01

    Full Text Available This study addresses the problem of magnetic field emission produced by laptop computers. Although the magnetic field is spread over the entire frequency spectrum, the most dangerous part of it to laptop users is the frequency range from 50 to 500 Hz, commonly called the extremely low frequency magnetic field. In this frequency region the magnetic field is characterized by high peak values. To examine the influence of a laptop's magnetic field emission in the office, a specific experiment is proposed. It includes measurement of the magnetic field at six positions on the laptop that are in close contact with its user. The results obtained from ten different laptop computers show extremely high emission at some positions, depending on power dissipation or poor ergonomics. Finally, the experiment identifies these dangerous positions of magnetic field emission and suggests possible solutions.

  11. Standing Together for Reproducibility in Large-Scale Computing: Report on reproducibility@XSEDE

    OpenAIRE

    James, Doug; Wilkins-Diehr, Nancy; Stodden, Victoria; Colbry, Dirk; Rosales, Carlos; Fahey, Mark; Shi, Justin; Silva, Rafael F.; Lee, Kyo; Roskies, Ralph; Loewe, Laurence; Lindsey, Susan; Kooper, Rob; Barba, Lorena; Bailey, David

    2014-01-01

    This is the final report on reproducibility@xsede, a one-day workshop held in conjunction with XSEDE14, the annual conference of the Extreme Science and Engineering Discovery Environment (XSEDE). The workshop's discussion-oriented agenda focused on reproducibility in large-scale computational research. Two important themes capture the spirit of the workshop submissions and discussions: (1) organizational stakeholders, especially supercomputer centers, are in a unique position to promote, enab...

  12. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the relation between the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
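
    The modeling stage described above rests on a regularized extreme learning machine, i.e. a single-hidden-layer network with random, untrained input weights and a ridge-regularized least-squares solve for the output weights. The sketch below is a minimal version of that regression under assumed inputs (hop-count vectors to a set of anchors) and outputs (physical distances); the hidden-layer size, regularization constant and toy data are placeholders, not values from the paper.

        import numpy as np

        class RegularizedELM:
            """Single-hidden-layer ELM with ridge-regularized output weights."""

            def __init__(self, n_hidden=200, reg=1e-2, seed=0):
                self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)

            def fit(self, X, Y):
                # random input weights are never trained; only the output layer is solved
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = self._hidden(X)
                # ridge solution: beta = (H'H + reg*I)^-1 H'Y
                A = H.T @ H + self.reg * np.eye(self.n_hidden)
                self.beta = np.linalg.solve(A, H.T @ Y)
                return self

            def predict(self, X):
                return self._hidden(X) @ self.beta

        # toy data: hop counts to 8 anchors -> distances to the same anchors
        rng = np.random.default_rng(1)
        hops = rng.integers(1, 15, size=(500, 8)).astype(float)
        dists = hops * 25.0 + rng.normal(scale=5.0, size=hops.shape)  # assumed ~25 m per hop

        model = RegularizedELM().fit(hops[:400], dists[:400])
        err = np.abs(model.predict(hops[400:]) - dists[400:]).mean()
        print("mean absolute distance error: %.1f m" % err)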

  13. Large Scale Influences on Summertime Extreme Precipitation in the Northeastern United States

    Science.gov (United States)

    Collow, Allison B. Marquardt; Bosilovich, Michael G.; Koster, Randal Dean

    2016-01-01

    Observations indicate that over the last few decades there has been a statistically significant increase in precipitation in the northeastern United States and that this can be attributed to an increase in precipitation associated with extreme precipitation events. Here a state-of-the-art atmospheric reanalysis is used to examine such events in detail. Daily extreme precipitation events defined at the 75th and 95th percentile from gridded gauge observations are identified for a selected region within the Northeast. Atmospheric variables from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), are then composited during these events to illustrate the time evolution of associated synoptic structures, with a focus on vertically integrated water vapor fluxes, sea level pressure, and 500-hectopascal heights. Anomalies of these fields move into the region from the northwest, with stronger anomalies present in the 95th percentile case. Although previous studies show tropical cyclones are responsible for the most intense extreme precipitation events, only 10 percent of the events in this study are caused by tropical cyclones. On the other hand, extreme events resulting from cutoff low pressure systems have increased. The time period of the study was divided in half to determine how the mean composite has changed over time. An arc of lower sea level pressure along the East Coast and a change in the vertical profile of equivalent potential temperature suggest a possible increase in the frequency or intensity of synoptic-scale baroclinic disturbances.
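
    A minimal sketch of the event selection and compositing procedure described above is given below: days on which regional precipitation exceeds a chosen percentile are identified, and an associated field is composited at a few daily lags. The synthetic precipitation series and sea-level-pressure array are stand-ins for the gauge observations and MERRA-2 fields used in the study.

        import numpy as np

        def composite_extremes(precip, field, pct=95.0, lags=(-2, -1, 0)):
            """Composite 'field' (time, lat, lon) on days when regional 'precip' (time,)
            exceeds its pct-th percentile, at the given daily lags."""
            thresh = np.percentile(precip, pct)
            events = np.where(precip > thresh)[0]
            anom = field - field.mean(axis=0)            # remove the time mean
            comps = {}
            for lag in lags:
                days = events + lag
                days = days[(days >= 0) & (days < field.shape[0])]
                comps[lag] = anom[days].mean(axis=0)     # composite anomaly map
            return thresh, events, comps

        # synthetic stand-ins: 20 summers x 92 days of regional precipitation and an SLP field
        nt, ny, nx = 20 * 92, 30, 40
        rng = np.random.default_rng(2)
        precip = rng.gamma(shape=0.8, scale=4.0, size=nt)
        slp = 1013.0 + rng.normal(scale=6.0, size=(nt, ny, nx))

        thresh, events, comps = composite_extremes(precip, slp)
        print("95th percentile: %.1f mm/day, %d event days" % (thresh, events.size))
        print("lag-0 composite SLP anomaly range: %.2f to %.2f hPa"
              % (comps[0].min(), comps[0].max()))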

  14. Scale orientated analysis of river width changes due to extreme flood hazards

    Directory of Open Access Journals (Sweden)

    G. Krapesch

    2011-08-01

    Full Text Available This paper analyses the morphological effects of extreme floods (recurrence interval >100 years) and examines which parameters best describe the width changes due to erosion, based on 5 affected alpine gravel bed rivers in Austria. The research was based on vertical aerial photos of the rivers before and after extreme floods, hydrodynamic numerical models and cross sectional measurements supported by LiDAR data of the rivers. Average width ratios (width after/before the flood) were calculated and correlated with different hydraulic parameters (specific stream power, shear stress, flow area, specific discharge). Depending on the geomorphological boundary conditions of the different rivers, a mean width ratio between 1.12 (Lech River) and 3.45 (Trisanna River) was determined on the reach scale. The specific stream power (SSP) best predicted the mean width ratios of the rivers, especially on the reach scale and sub reach scale. On the local scale more parameters have to be considered to define the "minimum morphological spatial demand of rivers", which is a crucial parameter for addressing and managing flood hazards and should be used in hazard zone plans and spatial planning.
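
    The reach-scale relationship reported above can be sketched as follows: specific stream power is computed from peak discharge, energy slope and pre-flood width, and then correlated with the width ratio. All numbers below are synthetic placeholders; the formula SSP = rho * g * Q * S / w is the standard definition, but the assumed response of the width ratio is purely illustrative.

        import numpy as np
        from scipy import stats

        def specific_stream_power(discharge, slope, width, rho=1000.0, g=9.81):
            """Specific stream power in W/m^2: rho * g * Q * S / w."""
            return rho * g * discharge * slope / width

        # synthetic reach data: peak discharge (m3/s), energy slope (-), pre-flood width (m)
        rng = np.random.default_rng(3)
        Q = rng.uniform(50, 600, size=40)
        S = rng.uniform(0.002, 0.03, size=40)
        w_before = rng.uniform(10, 60, size=40)
        ssp = specific_stream_power(Q, S, w_before)

        # assumed response: wider erosion where stream power is larger, plus noise
        width_ratio = 1.0 + 0.004 * ssp + rng.normal(scale=0.3, size=ssp.size)

        r, p = stats.pearsonr(np.log(ssp), width_ratio)
        print("Pearson r (log SSP vs. width ratio) = %.2f, p = %.3f" % (r, p))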

  15. How do the multiple large-scale climate oscillations trigger extreme precipitation?

    Science.gov (United States)

    Shi, Pengfei; Yang, Tao; Xu, Chong-Yu; Yong, Bin; Shao, Quanxi; Li, Zhenya; Wang, Xiaoyan; Zhou, Xudong; Li, Shu

    2017-10-01

    Identifying the links between variations in large-scale climate patterns and precipitation is of tremendous assistance in characterizing surplus or deficit of precipitation, which is especially important for evaluation of local water resources and ecosystems in semi-humid and semi-arid regions. Restricted by current limited knowledge on underlying mechanisms, statistical correlation methods are often used rather than physically based models to characterize the connections. Nevertheless, available correlation methods are generally unable to reveal the interactions among a wide range of climate oscillations and associated effects on precipitation, especially on extreme precipitation. In this work, a probabilistic analysis approach by means of a state-of-the-art Copula-based joint probability distribution is developed to characterize the aggregated behaviors of large-scale climate patterns and their connections to precipitation. This method is employed to identify the complex connections between climate patterns (Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO)) and seasonal precipitation over a typical semi-humid and semi-arid region, the Haihe River Basin in China. Results show that the interactions among multiple climate oscillations are non-uniform in most seasons and phases. Certain joint extreme phases can significantly trigger extreme precipitation (flood and drought) owing to the amplification effect among climate oscillations.
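
    The copula idea can be illustrated with a minimal Gaussian-copula sketch: both indices are mapped to uniform margins by ranking, a correlation is fitted in normal scores, and the probability that both indices are simultaneously in an extreme phase is evaluated. The Gaussian copula, the 90th-percentile definition of an extreme phase and the synthetic indices are assumptions for illustration; the study itself does not necessarily use this copula family.

        import numpy as np
        from scipy import stats

        def to_uniform(x):
            """Empirical probability integral transform (ranks scaled to (0, 1))."""
            return stats.rankdata(x) / (len(x) + 1.0)

        def gaussian_copula_joint_exceedance(x, y, px=0.9, py=0.9):
            """P(X above its px-quantile AND Y above its py-quantile) under a Gaussian copula."""
            u, v = stats.norm.ppf(to_uniform(x)), stats.norm.ppf(to_uniform(y))
            rho = np.corrcoef(u, v)[0, 1]
            mvn = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
            a, b = stats.norm.ppf(px), stats.norm.ppf(py)
            # joint survival via inclusion-exclusion: 1 - F(a) - F(b) + F(a, b)
            return 1.0 - px - py + mvn.cdf([a, b]), rho

        # synthetic correlated "AMO-like" and "ENSO-like" seasonal indices
        rng = np.random.default_rng(4)
        z = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=300)
        p_joint, rho = gaussian_copula_joint_exceedance(z[:, 0], z[:, 1])
        print("fitted copula correlation: %.2f" % rho)
        print("P(both indices above their 90th percentiles) = %.3f (vs. 0.010 if independent)" % p_joint)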

  16. Improving plot- and regional-scale crop models for simulating impacts of climate variability and extremes

    Science.gov (United States)

    Tao, F.; Rötter, R.

    2013-12-01

    Many studies on global climate report that climate variability is increasing with more frequent and intense extreme events [1]. There are quite large uncertainties from both the plot- and regional-scale models in simulating impacts of climate variability and extremes on crop development, growth and productivity [2,3]. One key to reducing the uncertainties is better exploitation of experimental data to eliminate crop model deficiencies and develop better algorithms that more adequately capture the impacts of extreme events, such as high temperature and drought, on crop performance [4,5]. In the present study, in a first step, the inter-annual variability in wheat yield and climate from 1971 to 2012 in Finland was investigated. Using statistical approaches the impacts of climate variability and extremes on wheat growth and productivity were quantified. In a second step, a plot-scale model, WOFOST [6], and a regional-scale crop model, MCWLA [7], were calibrated and validated, and applied to simulate wheat growth and yield variability from 1971-2012. Next, the estimated impacts of high temperature stress, cold damage, and drought stress on crop growth and productivity based on the statistical approaches, and on the crop simulation models WOFOST and MCWLA, were compared. Then, the impact mechanisms of climate extremes on crop growth and productivity in the WOFOST model and MCWLA model were identified, and subsequently, the various algorithms and impact functions were fitted against the long-term crop trial data. Finally, the impact mechanisms, algorithms and functions in the WOFOST model and MCWLA model were improved to better simulate the impacts of climate variability and extremes, particularly high temperature stress, cold damage and drought stress, for location-specific and large area climate impact assessments. Our studies provide a good example of how to improve, in parallel, the plot- and regional-scale models for simulating impacts of climate variability and extremes, as needed for

  17. Contribution of large-scale midlatitude disturbances to hourly precipitation extremes in the United States

    Science.gov (United States)

    Barbero, Renaud; Abatzoglou, John T.; Fowler, Hayley J.

    2018-02-01

    Midlatitude synoptic weather regimes account for a substantial portion of annual precipitation accumulation as well as multi-day precipitation extremes across parts of the United States (US). However, little attention has been devoted to understanding how synoptic-scale patterns contribute to hourly precipitation extremes. A majority of 1-h annual maximum precipitation (AMP) across the western US were found to be linked to two coherent midlatitude synoptic patterns: disturbances propagating along the jet stream, and cutoff upper-level lows. The influence of these two patterns on 1-h AMP varies geographically. Over 95% of 1-h AMP along the western coastal US were coincident with progressive midlatitude waves embedded within the jet stream, while over 30% of 1-h AMP across the interior western US were coincident with cutoff lows. Between 30-60% of 1-h AMP were coincident with the jet stream across the Ohio River Valley and southeastern US, whereas a majority of 1-h AMP over the rest of the central and eastern US were not found to be associated with either of these midlatitude synoptic features. Composite analyses for 1-h AMP days coincident with cutoff lows and the jet stream show that anomalous moisture flux and upper-level dynamics are responsible for initiating instability and setting up an environment conducive to 1-h AMP events. While hourly precipitation extremes are generally thought to be purely convective in nature, this study shows that large-scale dynamics and baroclinic disturbances may also contribute to precipitation extremes on sub-daily timescales.

  18. Using GRACE Satellite Gravimetry for Assessing Large-Scale Hydrologic Extremes

    Directory of Open Access Journals (Sweden)

    Alexander Y. Sun

    2017-12-01

    Full Text Available Global assessment of the spatiotemporal variability in terrestrial total water storage anomalies (TWSA) in response to hydrologic extremes is critical for water resources management. Using TWSA derived from the Gravity Recovery and Climate Experiment (GRACE) satellites, this study systematically assessed the skill of the TWSA-climatology (TC) approach and the breakpoint (BP) detection method for identifying large-scale hydrologic extremes. The TC approach calculates standardized anomalies by using the mean and standard deviation of the GRACE TWSA corresponding to each month. In the BP detection method, the empirical mode decomposition (EMD) is first applied to identify the mean return period of TWSA extremes, and then a statistical procedure is used to identify the actual occurrence times of abrupt changes (i.e., BPs) in TWSA. Both detection methods were demonstrated on basin-averaged TWSA time series for the world’s 35 largest river basins. A nonlinear event coincidence analysis measure was applied to cross-examine abrupt changes detected by these methods with those detected by the Standardized Precipitation Index (SPI). Results show that our EMD-assisted BP procedure is a promising tool for identifying hydrologic extremes using GRACE TWSA data. Abrupt changes detected by the BP method coincide well with those of the SPI anomalies and with documented hydrologic extreme events. Event timings obtained by the TC method were ambiguous for a number of the river basins studied, probably because the GRACE data length is too short to derive a long-term climatology at this time. The BP approach demonstrates a robust wet-dry anomaly detection capability, which will be important for applications with the upcoming GRACE Follow-On mission.
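
    The TC approach, as described above, amounts to standardizing each month's TWSA by that calendar month's climatological mean and standard deviation and flagging months whose standardized anomaly is large. Below is a minimal sketch with a synthetic basin-averaged series and an assumed ±2 threshold; the EMD-assisted breakpoint detection is not reproduced here.

        import numpy as np

        def tc_standardized_anomaly(twsa, months):
            """Standardize TWSA by the climatological mean/std of each calendar month."""
            z = np.empty_like(twsa, dtype=float)
            for m in range(1, 13):
                sel = months == m
                z[sel] = (twsa[sel] - twsa[sel].mean()) / twsa[sel].std(ddof=1)
            return z

        # synthetic 15-year basin-averaged TWSA (cm) with seasonal cycle, trend, and one dry spell
        n = 15 * 12
        months = np.tile(np.arange(1, 13), 15)
        rng = np.random.default_rng(5)
        twsa = 10 * np.sin(2 * np.pi * np.arange(n) / 12) - 0.02 * np.arange(n) + rng.normal(0, 3, n)
        twsa[100:110] -= 15.0   # imposed dry anomaly

        z = tc_standardized_anomaly(twsa, months)
        extremes = np.where(np.abs(z) > 2.0)[0]
        print("months flagged as hydrologic extremes:", extremes)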

  19. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  20. Extreme events in total ozone: Spatio-temporal analysis from local to global scale

    Science.gov (United States)

    Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; di Rocco, Stefania; Jancso, Leonhardt M.; Peter, Thomas; Davison, Anthony C.

    2010-05-01

    dynamics (NAO, ENSO) on total ozone is a global feature in the northern mid-latitudes (Rieder et al., 2010c). In a next step frequency distributions of extreme events are analyzed on global scale (northern and southern mid-latitudes). A specific focus here is whether findings gained through analysis of long-term European ground based stations can be clearly identified as a global phenomenon. By showing results from these three types of studies an overview of extreme events in total ozone (and the dynamical and chemical features leading to those) will be presented from local to global scales. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Rieder, H.E., Jancso, L., Staehelin, J., Maeder, J.A., Ribatet, Peter, T., and A.D., Davison (2010): Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations, in preparation. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa
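
    The peaks-over-threshold analysis underlying these studies (carried out with the R package POT in the cited work) can be sketched in a few lines: excesses over a high threshold are fitted with a generalized Pareto distribution and converted into a return level. The threshold choice, the synthetic ozone-like series and the 50-year return period below are placeholders, not values from the papers.

        import numpy as np
        from scipy import stats

        def pot_return_level(x, threshold, return_period, obs_per_year=365.25):
            """Fit a GPD to threshold excesses and compute the return level."""
            excesses = x[x > threshold] - threshold
            shape, _, scale = stats.genpareto.fit(excesses, floc=0.0)
            rate = excesses.size / (x.size / obs_per_year)      # exceedances per year
            m = return_period * rate                            # expected exceedances in T years
            if abs(shape) > 1e-6:
                rl = threshold + scale / shape * (m ** shape - 1.0)
            else:
                rl = threshold + scale * np.log(m)
            return rl, shape, scale

        # synthetic daily total-ozone-like series (DU) with a heavy-ish upper tail
        rng = np.random.default_rng(6)
        ozone = 330 + 30 * rng.standard_t(df=6, size=40 * 365)

        u = np.percentile(ozone, 97.5)
        rl50, xi, sigma = pot_return_level(ozone, u, return_period=50.0)
        print("threshold u = %.1f DU, GPD shape = %.2f, 50-year return level = %.1f DU" % (u, xi, rl50))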

  1. Extreme-Scale Stochastic Particle Tracing for Uncertain Unsteady Flow Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Hanqi; He, Wenbin; Seo, Sangmin; Shen, Han-Wei; Peterka, Tom

    2016-11-13

    We present an efficient and scalable solution to estimate uncertain transport behaviors using stochastic flow maps (SFMs) for visualizing and analyzing uncertain unsteady flows. SFM computation is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We alleviate the computational cost by decoupling the time dependencies in SFMs so that we can process adjacent time steps independently and then compose them together for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We then parallelize over tasks (packets of particles in our design) to achieve high efficiency in MPI/thread hybrid programming. Such a task model also enables CPU/GPU coprocessing. We show the scalability on two supercomputers, Mira (up to 1M Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), that can trace billions of particles in seconds.
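
    A serial toy version of the stochastic-flow-map estimation is sketched below: each seed is traced many times through an uncertain velocity field with noisy Euler steps, and the spread of the endpoints approximates the SFM for one time interval. The synthetic velocity field, noise model and integrator are assumptions, and the paper's key ingredients (decoupling and composing adjacent intervals, adaptive refinement, task-parallel packets of particles) are omitted.

        import numpy as np

        def velocity(p, t):
            """Synthetic unsteady 2D velocity field (stand-in for simulation output)."""
            x, y = p[..., 0], p[..., 1]
            u = -np.sin(np.pi * x) * np.cos(np.pi * y) * (1 + 0.3 * np.sin(t))
            v = np.cos(np.pi * x) * np.sin(np.pi * y) * (1 + 0.3 * np.sin(t))
            return np.stack([u, v], axis=-1)

        def stochastic_flow_map(seeds, t0, t1, n_runs=100, dt=0.01, sigma=0.05, seed=0):
            """Monte Carlo estimate of the flow map t0 -> t1: returns endpoints of
            n_runs noisy traces per seed, shape (n_seeds, n_runs, 2)."""
            rng = np.random.default_rng(seed)
            # replicate every seed n_runs times and integrate with noisy Euler steps
            p = np.repeat(seeds[:, None, :], n_runs, axis=1).astype(float)
            t = t0
            while t < t1:
                noise = rng.normal(scale=sigma, size=p.shape)
                p = p + dt * (velocity(p, t) + noise)
                t += dt
            return p

        seeds = np.array([[0.25, 0.25], [0.5, 0.5], [0.75, 0.25]])
        end = stochastic_flow_map(seeds, 0.0, 1.0)
        for s, e in zip(seeds, end):
            print("seed", s, "-> mean endpoint", e.mean(axis=0).round(3),
                  "std", e.std(axis=0).round(3))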

  2. Power-law scaling of extreme dynamics near higher-order exceptional points

    Science.gov (United States)

    Zhong, Q.; Christodoulides, D. N.; Khajavikhan, M.; Makris, K. G.; El-Ganainy, R.

    2018-02-01

    We investigate the extreme dynamics of non-Hermitian systems near higher-order exceptional points in photonic networks constructed using the bosonic algebra method. We show that strong power oscillations for certain initial conditions can occur as a result of the peculiar eigenspace geometry and its dimensionality collapse near these singularities. By using complementary numerical and analytical approaches, we show that, in the parity-time (PT) phase near exceptional points, the logarithm of the maximum optical power amplification scales linearly with the order of the exceptional point. We focus in our discussion on photonic systems, but we note that our results apply to other physical systems as well.

  3. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A. B. A.

    2017-07-01

    One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future global and regional SLR, corresponding uncertainties in contemporary ESL have not been assessed and projections are limited. Here we quantify, for the first time at global scale, the uncertainties in present-day ESL estimates, which have by default been ignored in broad-scale sea-level rise impact assessments to date. ESL uncertainties exceed those from global SLR projections and, assuming that we meet the Paris agreement goals, the projected SLR itself by the end of the century in many regions. Both uncertainties in SLR projections and ESL estimates need to be understood and combined to fully assess potential impacts and adaptation needs.

  4. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    O'Leary, Patrick [Kitware, Inc., Clifton Park, NY (United States)

    2017-09-13

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  5. Measurement Properties of the Lower Extremity Functional Scale: A Systematic Review.

    Science.gov (United States)

    Mehta, Saurabh P; Fulton, Allison; Quach, Cedric; Thistle, Megan; Toledo, Cesar; Evans, Neil A

    2016-03-01

    Systematic review of measurement properties. Many primary studies have examined the measurement properties, such as reliability, validity, and sensitivity to change, of the Lower Extremity Functional Scale (LEFS) in different clinical populations. A systematic review summarizing these properties for the LEFS may provide an important resource. To locate and synthesize evidence on the measurement properties of the LEFS and to discuss the clinical implications of the evidence. A literature search was conducted in 4 databases (PubMed, MEDLINE, Embase, and CINAHL), using predefined search terms. Two reviewers performed a critical appraisal of the included studies using a standardized assessment form. A total of 27 studies were included in the review, of which 18 achieved a very good to excellent methodological quality level. The LEFS scores demonstrated excellent test-retest reliability (intraclass correlation coefficients ranging between 0.85 and 0.99) and demonstrated the expected relationships with measures assessing similar constructs (Pearson correlation coefficient values of greater than 0.7). The responsiveness of the LEFS scores was excellent, as suggested by consistently high effect sizes (greater than 0.8) in patients with different lower extremity conditions. Minimal detectable change at the 90% confidence level (MDC90) for the LEFS scores varied between 8.1 and 15.3 across different reassessment intervals in a wide range of patient populations. The pooled estimate of the MDC90 was 6 points and the minimal clinically important difference was 9 points in patients with lower extremity musculoskeletal conditions, which are indicative of true change and clinically meaningful change, respectively. The results of this review support the reliability, validity, and responsiveness of the LEFS scores for assessing functional impairment in a wide array of patient groups with lower extremity musculoskeletal conditions.
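
    The reliability indices reported above are related by simple formulas: the standard error of measurement is SEM = SD * sqrt(1 - ICC), and the minimal detectable change at the 90% confidence level is MDC90 = 1.645 * sqrt(2) * SEM. The short computation below uses illustrative values for an LEFS-like 0-80 scale rather than numbers from any specific study in the review.

        import math

        def sem(sd, icc):
            """Standard error of measurement from test-retest reliability."""
            return sd * math.sqrt(1.0 - icc)

        def mdc(sd, icc, confidence_z=1.645):
            """Minimal detectable change: z * SEM * sqrt(2); z = 1.645 gives MDC90."""
            return confidence_z * sem(sd, icc) * math.sqrt(2.0)

        # illustrative values for an LEFS-like 0-80 scale
        baseline_sd, icc = 11.0, 0.92
        print("SEM   = %.1f points" % sem(baseline_sd, icc))
        print("MDC90 = %.1f points" % mdc(baseline_sd, icc))

        change = 10   # observed change in a patient's score
        print("change of %d points exceeds MDC90: %s" % (change, change > mdc(baseline_sd, icc)))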

  6. Development of a small-scale computer cluster

    Science.gov (United States)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created the need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits on its performance, a cluster of computers with the proper software can multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom built desktop computers can be arranged in a rack mount situation, gaining the space saving of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and still remaining cost effective.

  7. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    Energy Technology Data Exchange (ETDEWEB)

    Alipour, N.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of); Innes, D. E. [Max-Planck Institut fuer Sonnensystemforschung, 37191 Katlenburg-Lindau (Germany)

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 events and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
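
    The detection pipeline described above (Zernike moments of space-time slices fed to an SVM) can be sketched as follows. The mahotas library is assumed for the Zernike moments and scikit-learn for the SVM, and random patches with or without a synthetic dimming-like depression stand in for the EUVI/AIA space-time slices; the moment order, patch size and SVM settings are placeholders.

        import numpy as np
        import mahotas.features as mf              # assumed available, provides Zernike moments
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        def zernike_features(patch, radius=32, degree=8):
            """Rotation-invariant Zernike moment magnitudes of a 2D space-time patch."""
            return mf.zernike_moments(patch, radius, degree=degree)

        # stand-in patches: "events" carry a localized dimming-like depression, others are noise
        rng = np.random.default_rng(7)
        def make_patch(event):
            p = rng.normal(1.0, 0.1, size=(64, 64))
            if event:
                yy, xx = np.mgrid[:64, :64]
                p -= 0.6 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 80.0)
            return p

        X = np.array([zernike_features(make_patch(i % 2 == 0)) for i in range(300)])
        y = np.array([i % 2 == 0 for i in range(300)], dtype=int)

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
        print("held-out detection accuracy: %.2f" % clf.score(Xte, yte))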

  8. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processor cores in total. The cluster runs an Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted in order to test the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI Hello program written in the C language. Additionally, a performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, four runs were done by executing the same code on a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases, shortening by roughly half when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware which is capable of higher computing power than a single-CPU machine, and this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
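
    The communication and performance tests described above used a C MPI "Hello" program and a fixed-size benchmark. A rough Python analogue using mpi4py (assumed to be installed alongside MPICH2) is sketched below; the pi-integration workload is an arbitrary stand-in for the benchmark code actually used.

        # hello_and_scaling.py -- run with, e.g.: mpiexec -n 8 python hello_and_scaling.py
        # Rough Python/mpi4py analogue of the C "MPI Hello" communication test and the
        # fixed-size performance test described above (mpi4py assumed to be available).
        import time
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # communication test: every rank reports in
        print("Hello from rank %d of %d" % (rank, size))

        # performance test: midpoint-rule integration of 4/(1+x^2) over [0,1] (= pi),
        # with the fixed workload split across however many processes were launched
        n = 10_000_000
        comm.Barrier()
        t0 = time.time()
        x = (np.arange(rank, n, size) + 0.5) / n
        local = np.sum(4.0 / (1.0 + x * x)) / n
        pi = comm.reduce(local, op=MPI.SUM, root=0)
        elapsed = comm.reduce(time.time() - t0, op=MPI.MAX, root=0)

        if rank == 0:
            print("pi ~= %.8f with %d processes in %.3f s" % (pi, size, elapsed))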

  9. Modelling of spatio-temporal precipitation relevant for urban hydrology with focus on scales, extremes and climate change

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen

    -correlation lengths for sub-daily extreme precipitation besides having too low intensities. Especially the wrong spatial correlation structure is disturbing from an urban hydrological point of view as short-term extremes will cover too much ground if derived directly from bias corrected regional climate model output...... of precipitation are compared and used to rank climate models with respect to performance metrics. The four different observational data sets themselves are compared at daily temporal scale with respect to climate indices for mean and extreme precipitation. Data density seems to be a crucial parameter for good...... happening in summer and most of the daily extremes in fall. This behaviour is in good accordance with reality where short term extremes originate in convective precipitation cells that occur when it is very warm and longer term extremes originate in frontal systems that dominate the fall and winter seasons...

  10. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  11. Analyzing extreme sea levels for broad-scale impact and adaptation studies

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.

    2017-12-01

    Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels (ESL), because increasing damage due to extreme events is one of the major consequences of sea-level rise (SLR) and climate change. Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future SLR; different scenarios were developed with process-based or semi-empirical models and used for coastal impact studies at various temporal and spatial scales to guide coastal management and adaptation efforts. Uncertainties in future SLR are typically accounted for by analyzing the impacts associated with a range of scenarios and model ensembles. ESL distributions are then displaced vertically according to the SLR scenarios under the inherent assumption that we have perfect knowledge on the statistics of extremes. However, there is still a limited understanding of present-day ESL which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of storm surge water levels, and (2) statistical models used for determining present-day ESL exceedance probabilities. There is no universally accepted approach to obtain such values for broad-scale flood risk assessments and while substantial research has explored SLR uncertainties, we quantify, for the first time globally, key uncertainties in ESL estimates. We find that contemporary ESL uncertainties exceed those from SLR projections and, assuming that we meet the Paris agreement, the projected SLR itself by the end of the century. Our results highlight the necessity to further improve our understanding of uncertainties in ESL estimates through (1) continued improvement of numerical and statistical models to simulate and analyze coastal water levels and (2) exploit the rich observational database and continue data archeology to obtain longer time series and remove model bias

  12. Extreme Temperature Regimes during the Cool Season and their Associated Large-Scale Circulations

    Science.gov (United States)

    Xie, Z.

    2015-12-01

    In the cool season (November-March), extreme temperature events (ETEs) regularly hit the continental United States (US) and have significant societal impacts. According to the anomalous amplitudes of the surface air temperature (SAT), there are two typical types of ETEs, i.e. cold waves (CWs) and warm waves (WWs). This study used cluster analysis to categorize the CWs and WWs into four distinct regimes each and investigated their associated large-scale circulations on intra-seasonal time scales. Most of the CW regimes have a large areal impact over the continental US. However, the distribution of cold SAT anomalies varies considerably among the four regimes. At sea level, the four CW regimes are characterized by anomalous high pressure over North America (near and to the west of the cold anomaly) with different extension and orientation. As a result, anomalous northerlies along the east flank of the anomalous high pressure convey cold air into the continental US. In the middle troposphere, the leading two groups feature large-scale, zonally elongated circulation anomaly patterns, while the other two regimes exhibit synoptic wavetrain patterns with meridionally elongated features. As for the WW regimes, there is some symmetry and anti-symmetry of the patterns with respect to the CW regimes. The WW regimes are characterized by anomalous low pressure and southerly winds over North America. The first and fourth groups are affected by remote forcing emanating from the North Pacific, while the others appear mainly locally forced.
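
    The categorization step can be illustrated with a small k-means sketch: SAT anomaly maps for extreme-temperature days are flattened into vectors and clustered into four regimes, whose composites are then examined. The synthetic anomaly maps and the use of k-means are assumptions for illustration; the study's exact clustering method and data are not reproduced here.

        import numpy as np
        from sklearn.cluster import KMeans

        # synthetic SAT anomaly maps (events x gridpoints) standing in for cold-wave days
        rng = np.random.default_rng(8)
        n_events, ny, nx = 400, 20, 40
        base_patterns = rng.normal(size=(4, ny * nx))            # four "true" regimes
        labels_true = rng.integers(0, 4, size=n_events)
        maps = base_patterns[labels_true] + 0.8 * rng.normal(size=(n_events, ny * nx))

        # categorize events into four regimes and inspect their composites
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(maps)
        for k in range(4):
            members = maps[km.labels_ == k]
            composite = members.mean(axis=0).reshape(ny, nx)     # regime composite map
            print("regime %d: %3d events, composite anomaly range %.2f to %.2f"
                  % (k, members.shape[0], composite.min(), composite.max()))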

  13. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  14. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2015-01-01

    Warehouse sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers to provide the infrastructure to the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput, the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within......

  15. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. In large part, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D, as well as in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  16. Large scale computing in theoretical physics: Example QCD

    International Nuclear Information System (INIS)

    Schilling, K.

    1986-01-01

    The limitations of the classical mathematical analysis of Newton and Leibniz appear to be more and more overcome by the power of modern computers. Large scale computing techniques - which closely resemble the methods used in simulations within statistical mechanics - make it possible to treat nonlinear systems with many degrees of freedom, such as field theories in nonperturbative situations, where analytical methods fail. The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean world. The talk will attempt to describe both the computer aspects and the present state of the art of spectrum calculations within lattice QCD. (orig.)

  17. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
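
    The hierarchical transformation can be sketched as a two-level broadcast: processes are split into groups, the root broadcasts to the group leaders, and each leader broadcasts within its own group. The mpi4py sketch below assumes the data originates on global rank 0 and uses an arbitrary group size; it illustrates the idea only and is not the authors' optimized implementation.

        # hier_bcast.py -- run with, e.g.: mpiexec -n 16 python hier_bcast.py
        # Minimal two-level (hierarchical) broadcast built on MPI_Bcast, in the spirit
        # of the optimization described above (mpi4py assumed; data originates on rank 0).
        import numpy as np
        from mpi4py import MPI

        def hierarchical_bcast(buf, comm, group_size=4):
            rank = comm.Get_rank()
            group = rank // group_size                    # which group this rank belongs to
            intra = comm.Split(color=group, key=rank)     # communicator within the group
            is_leader = intra.Get_rank() == 0
            # group leaders (including global rank 0) form the upper-level communicator
            inter = comm.Split(color=0 if is_leader else MPI.UNDEFINED, key=rank)
            if is_leader:
                inter.Bcast(buf, root=0)                  # level 1: root -> group leaders
                inter.Free()
            intra.Bcast(buf, root=0)                      # level 2: leader -> group members
            intra.Free()

        comm = MPI.COMM_WORLD
        data = np.arange(8, dtype="d") if comm.Get_rank() == 0 else np.empty(8, dtype="d")
        hierarchical_bcast(data, comm)
        print("rank %d received %s" % (comm.Get_rank(), data))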

  18. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increased time needed for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  19. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    Science.gov (United States)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are

  20. Toward Improving Predictability of Extreme Hydrometeorological Events: the Use of Multi-scale Climate Modeling in the Northern High Plains

    Science.gov (United States)

    Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.

    2014-12-01

    Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains. Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities, such as agriculture in the NHP, drive economies. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple with identifying what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean Land Atmosphere Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography that reduces errors in gradient computation and anomalous vertical dispersion. Our hypothesis is that wet conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both the flood forecast and the drought forecast. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e. the years 2011-2012). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during refined-mesh simulations of at most 15 days, drought is evaluated during the following 15 months. Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th degree resolution dataset obtained from climatological stations in Canada, US, and

  1. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments to utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models, and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and queues for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.

  2. Establishing the Turkish version of the SIGAM mobility scale, and determining its validity and reliability in lower extremity amputees.

    Science.gov (United States)

    Yilmaz, Hülya; Gafuroğlu, Ümit; Ryall, Nicola; Yüksel, Selcen

    2018-02-01

    The aim of this study is to adapt the Special Interest Group in Amputee Medicine (SIGAM) mobility scale to Turkish, and to test its validity and reliability in lower extremity amputees. Adaptation of the scale into Turkish was performed by following the steps in the American Association of Orthopedic Surgeons (AAOS) guideline. The Turkish version of the scale was tested twice on 109 patients who had lower extremity amputations, at 0 and 72 hours. The reliability of the Turkish version was tested for internal consistency and test-retest reliability. Structural validity was tested using the "scale validity" method. For this purpose, the scores of the Short Form-36 (SF-36), Functional Ambulation Scale (FAS), Get Up and Go Test, and Satisfaction with the Prosthesis Questionnaire (SATPRO) were calculated, and analyzed using Spearman's correlation test. Cronbach's alpha coefficient was 0.67 for the Turkish version of the SIGAM mobility scale. Cohen's kappa coefficients were between 0.224 and 0.999. Repeatability according to the results of the SIGAM mobility scale (grades A-F) was 0.822. We found significant and strong positive correlations of the SIGAM mobility scale results with the FAS, Get Up and Go Test, SATPRO, and all of the SF-36 subscales. In our study, the Turkish version of the SIGAM mobility scale was found to be a reliable, valid, and easy-to-use scale for measuring mobility in lower extremity amputees in everyday practice. Implications for Rehabilitation Amputation is the surgical removal of a severely injured and nonfunctional extremity, at a level of one or more bones proximal to the body. Loss of a lower extremity is one of the most important conditions that cause functional disability. The Special Interest Group in Amputee Medicine (SIGAM) mobility scale contains 21 questions that evaluate the mobility of lower extremity amputees. Lack of a specific Turkish scale that evaluates rehabilitation results and mobility of lower extremity amputees, and determines their

  3. 3D artefact for concurrent scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises a carbon fibre tubular structure on which a number of reference ruby spheres are glued. The artefact is positioned and scanned together with the workpiece inside the CT scanner...

  4. Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining

    Energy Technology Data Exchange (ETDEWEB)

    Bautista-Gomez, Leonardo [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-16

    Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.
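
    One simple application-level detector of the kind described can be sketched as follows: each new value of a monitored data point is predicted by linear extrapolation from its recent history, and values that deviate from the prediction by much more than the running prediction error are flagged. The extrapolation order, the adaptive threshold and the synthetic time series are assumptions; the paper couples several such detectors and tunes them per dataset.

        import numpy as np

        def sdc_detector(series, k=4.0):
            """Flag indices whose value deviates from a linear extrapolation of the two
            previous time steps by more than k times the running prediction error.
            Note that the steps right after a corrupted value may also be flagged,
            because the corrupted value contaminates their predictions."""
            flagged, err_scale = [], 1e-12
            for t in range(2, len(series)):
                predicted = 2.0 * series[t - 1] - series[t - 2]   # linear extrapolation
                err = abs(series[t] - predicted)
                if err > k * err_scale and t > 10:                # warm-up before trusting the scale
                    flagged.append(t)
                else:
                    err_scale = 0.9 * err_scale + 0.1 * max(err, 1e-12)  # adapt to normal dynamics
            return flagged

        # smooth "simulation output" for one grid point, with one silently corrupted value
        t = np.linspace(0, 10, 500)
        data = np.cos(t) * np.exp(-0.05 * t)
        data[250] += 0.5          # bit-flip-like silent corruption

        print("corrupted steps detected:", sdc_detector(data))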

  5. Effect of Variable Spatial Scales on USLE-GIS Computations

    Science.gov (United States)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

    Use of an appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessing annual rates of soil erosion at different spatial scales/grid sizes and at analysing how changes in spatial scale affect USLE-GIS computations, using simulation and statistical variability. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in the Shakkar River watershed, situated in the Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote Sensing and GIS techniques were integrated with the Universal Soil Loss Equation (USLE) to predict the spatial distribution of soil erosion in the study area at four different spatial scales, viz. 30 m, 50 m, 100 m, and 200 m. Rainfall data, a soil map, a digital elevation model (DEM), an executable C++ program, and a satellite image of the area were used for preparation of the thematic maps for the various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at the four different grid sizes. The statistical analysis of the four estimated datasets showed that the sediment loss dataset at the 30 m spatial scale has the minimum standard deviation (2.16), variance (4.68), and percent deviation from observed values (2.68-18.91%), and the highest coefficient of determination (R2 = 0.874) among all four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for the use of finer spatial scales in spatially distributed soil erosion modelling.
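
    The USLE overlay itself is a cell-by-cell product of factor grids, A = R * K * LS * C * P, and the scale experiment amounts to recomputing it on progressively coarser grids. The sketch below uses synthetic factor rasters and simple block averaging with power-of-two factors (30/60/120/240 m) instead of the study's 30/50/100/200 m grids, so it illustrates the procedure rather than reproducing the reported statistics.

        import numpy as np

        def usle(R, K, LS, C, P):
            """Cell-by-cell USLE soil loss A (t/ha/yr) from raster factor grids."""
            return R * K * LS * C * P

        def coarsen(grid, factor):
            """Block-average a 2D grid to a coarser resolution (e.g. 30 m -> 60 m)."""
            ny, nx = grid.shape
            ny, nx = ny - ny % factor, nx - nx % factor
            return grid[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

        # synthetic 30 m factor grids for a small watershed
        rng = np.random.default_rng(9)
        shape = (400, 400)
        R = np.full(shape, 550.0)                              # rainfall erosivity
        K = rng.uniform(0.2, 0.5, shape)                       # soil erodibility
        LS = rng.lognormal(mean=0.0, sigma=0.6, size=shape)    # slope length/steepness
        C = rng.uniform(0.05, 0.4, shape)                      # cover management
        P = np.full(shape, 1.0)                                # support practice

        for factor, scale in [(1, "30 m"), (2, "60 m"), (4, "120 m"), (8, "240 m")]:
            grids = [coarsen(g, factor) if factor > 1 else g for g in (R, K, LS, C, P)]
            A = usle(*grids)
            print("%6s grid: mean soil loss %.2f, std %.2f t/ha/yr" % (scale, A.mean(), A.std()))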

  6. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  7. Rotational profile of the lower extremity in achondroplasia: computed tomographic examination of 25 patients

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hae-Ryong; Suh, Seung-Woo [Korea University Guro Hospital, Department of Orthopaedic Surgery, Rare Diseases Institute, Seoul (Korea); Choonia, Abi-Turab [Laud Clinic, Department of Orthopaedic Surgery, Mumbai (India); Hong, Suk Joo; Cha, In Ho [Korea University Guro Hospital, Department of Radiology, Seoul (Korea); Lee, Seok-Hyun [Dongguk University Ilsan Buddhist Hospital, Department of Orthopaedic Surgery, Goyang (Korea); Park, Jong-Tae [Korea University Ansan Hospital, Department of Occupational and Environmental Medicine, Ansan (Korea)

    2006-12-15

    To evaluate lower-extremity rotational abnormalities in subjects with achondroplasia using computed tomography (CT) scans. CT scans were performed in 25 subjects with achondroplasia (13 skeletally immature, mean age 8.7 years; 12 skeletally mature, mean age 17.6 years). In a total of 50 bilateral limbs, CT images were used to measure the angles of acetabular anteversion, femoral anteversion, and tibial external torsion. Measurement was performed by three examiners and then repeated by one examiner. Inter- and intraobserver agreements were analyzed, and results were compared with previously reported normal values. Mean values for skeletally immature and skeletally mature subjects were 13.6±7.5 and 21.5±6.4 respectively for acetabular anteversion, 27.1±20.8 and 30.5±20.1 for femoral torsion, and 21.6±10.6 and 22.5±10.8 for tibial torsion. Intra- and interobserver agreements were good to excellent. Acetabular anteversion and femoral anteversion in skeletally mature subjects were greater than normal values in previous studies. Both skeletally immature and mature subjects with achondroplasia had decreased tibial torsion compared to normal skeletally immature and mature subjects. Lower-extremity rotational abnormalities in subjects with achondroplasia include decreased tibial external torsion in both skeletally immature and mature subjects, as well as increased femoral and acetabular anteversion in skeletally mature subjects. (orig.)

  8. Rotational profile of the lower extremity in achondroplasia: computed tomographic examination of 25 patients

    International Nuclear Information System (INIS)

    Song, Hae-Ryong; Suh, Seung-Woo; Choonia, Abi-Turab; Hong, Suk Joo; Cha, In Ho; Lee, Seok-Hyun; Park, Jong-Tae

    2006-01-01

    To evaluate lower-extremity rotational abnormalities in subjects with achondroplasia using computed tomography (CT) scans. CT scans were performed in 25 subjects with achondroplasia (13 skeletally immature, mean age 8.7 years; 12 skeletally mature, mean age 17.6 years). In a total of 50 bilateral limbs, CT images were used to measure the angles of acetabular anteversion, femoral anteversion, and tibial external torsion. Measurement was performed by three examiners and then repeated by one examiner. Inter- and intraobserver agreements were analyzed, and results were compared with previously reported normal values. Mean values for skeletally immature and skeletally mature subjects were 13.6±7.5 and 21.5±6.4 respectively for acetabular anteversion, 27.1±20.8 and 30.5±20.1 for femoral torsion, and 21.6±10.6 and 22.5±10.8 for tibial torsion. Intra- and interobserver agreements were good to excellent. Acetabular anteversion and femoral anteversion in skeletally mature subjects were greater than normal values in previous studies. Both skeletally immature and mature subjects with achondroplasia had decreased tibial torsion compared to normal skeletally immature and mature subjects. Lower-extremity rotational abnormalities in subjects with achondroplasia include decreased tibial external torsion in both skeletally immature and mature subjects, as well as increased femoral and acetabular anteversion in skeletally mature subjects. (orig.)

  9. The structure and large-scale organization of extreme cold waves over the conterminous United States

    Science.gov (United States)

    Xie, Zuowei; Black, Robert X.; Deng, Yi

    2017-12-01

    Extreme cold waves (ECWs) occurring over the conterminous United States (US) are studied through a systematic identification and documentation of their local synoptic structures, associated large-scale meteorological patterns (LMPs), and forcing mechanisms external to the US. Focusing on the boreal cool season (November-March) for 1950‒2005, a hierarchical cluster analysis identifies three ECW patterns, respectively characterized by cold surface air temperature anomalies over the upper midwest (UM), northwestern (NW), and southeastern (SE) US. Locally, ECWs are synoptically organized by anomalous high pressure and northerly flow. At larger scales, the UM LMP features a zonal dipole in the mid-tropospheric height field over North America, while the NW and SE LMPs each include a zonal wave train extending from the North Pacific across North America into the North Atlantic. The Community Climate System Model version 4 (CCSM4) in general simulates the three ECW patterns quite well and successfully reproduces the observed enhancements in the frequency of their associated LMPs. La Niña and the cool phase of the Pacific Decadal Oscillation (PDO) favor the occurrence of NW ECWs, while the warm PDO phase, low Arctic sea ice extent and high Eurasian snow cover extent (SCE) are associated with elevated SE-ECW frequency. Additionally, high Eurasian SCE is linked to increases in the occurrence likelihood of UM ECWs.

  10. Automatic detection of ischemic stroke based on scaling exponent electroencephalogram using extreme learning machine

    Science.gov (United States)

    Adhi, H. A.; Wijaya, S. K.; Prawito; Badri, C.; Rezal, M.

    2017-03-01

    Stroke is a cerebrovascular disease caused by obstruction of blood flow to the brain. It is the leading cause of death in Indonesia and the second leading cause worldwide, and it is also a major cause of disability. Ischemic stroke accounts for most stroke cases. Obstruction of blood flow can cause tissue damage, which results in electrical changes in the brain that can be observed through the electroencephalogram (EEG). In this study, we present results for the automatic detection of ischemic stroke versus normal subjects based on the scaling exponent of the EEG obtained through detrended fluctuation analysis (DFA), using an extreme learning machine (ELM) as the classifier. The signal processing was performed on 18 channels of EEG in the range of 0-30 Hz. The scaling exponents of the subjects were used as the input to the ELM to classify ischemic stroke. Detection performance was evaluated in terms of accuracy, sensitivity, and specificity. With 120 hidden neurons and a sine activation function in the ELM, the proposed method achieved 84% accuracy, 82% sensitivity, and 87% specificity.
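
    A minimal sketch of the DFA step described above, assuming first-order (linear) detrending and an illustrative set of window sizes; the study's exact DFA settings, channel handling and ELM training are not reproduced here.

      import numpy as np

      def dfa_scaling_exponent(signal, scales=(16, 32, 64, 128, 256)):
          """Return the DFA scaling exponent (slope of log F(n) versus log n)."""
          profile = np.cumsum(np.asarray(signal, float) - np.mean(signal))
          fluctuations = []
          for n in scales:
              segs = profile[:(len(profile) // n) * n].reshape(-1, n)
              t = np.arange(n)
              rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
                     for seg in segs]                      # linear detrending per window
              fluctuations.append(np.mean(rms))
          alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
          return alpha

      # White noise should give alpha close to 0.5; an EEG channel is fed in the same way
      print(dfa_scaling_exponent(np.random.default_rng(1).standard_normal(4096)))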

  11. Kinetic turbulence simulations at extreme scale on leadership-class systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bei [Princeton Univ., Princeton, NJ (United States); Ethier, Stephane [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Tang, William [Princeton Univ., Princeton, NJ (United States); Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Williams, Timothy [Argonne National Lab. (ANL), Argonne, IL (United States); Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Madduri, Kamesh [The Pennsylvania State Univ., University Park, PA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at the ALCF and, recently, the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems, enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  12. Communicating Climate Uncertainties: Challenges and Opportunities Related to Spatial Scales, Extreme Events, and the Warming 'Hiatus'

    Science.gov (United States)

    Casola, J. H.; Huber, D.

    2013-12-01

    Many media, academic, government, and advocacy organizations have achieved sophistication in developing effective messages based on scientific information, and can quickly translate salient aspects of emerging climate research and evolving observations. However, there are several ways in which valid messages can be misconstrued by decision makers, leading them to inaccurate conclusions about the risks associated with climate impacts. Three cases will be discussed: 1) Issues of spatial scale in interpreting climate observations: Local climate observations may contradict summary statements about the effects of climate change on larger regional or global spatial scales. Effectively addressing these differences often requires communicators to understand local and regional climate drivers, and the distinction between a 'signal' associated with climate change and local climate 'noise.' Hydrological statistics in Missouri and California are shown to illustrate this case. 2) Issues of complexity related to extreme events: Climate change is typically invoked following a wide range of damaging meteorological events (e.g., heat waves, landfalling hurricanes, tornadoes), regardless of the strength of the relationship between anthropogenic climate change and the frequency or severity of that type of event. Examples are drawn from media coverage of several recent events, contrasting useful and potentially confusing word choices and frames. 3) Issues revolving around climate sensitivity: The so-called 'pause' or 'hiatus' in global warming has reverberated strongly through political and business discussions of climate change. Addressing the recent slowdown in warming yields an important opportunity to raise climate literacy in these communities. Attempts to use recent observations as a wedge between climate 'believers' and 'deniers' are likely to be counterproductive. Examples are drawn from Congressional testimony and media stories. All three cases illustrate ways that decision

  13. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  14. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random accesses to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
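
    A minimal sketch of the sorting idea: periodically reorder the particle arrays by grid-cell index so that particles adjacent in space are also adjacent in memory during charge accumulation and pushing. The grid size, sort interval and array layout are illustrative assumptions, not details from the paper.

      import numpy as np

      def sort_particles_by_cell(x, y, vx, vy, cell_size, nx):
          """Reorder particle arrays so particles in the same cell are contiguous,
          reducing random accesses to slow (paged) memory."""
          cell = (y / cell_size).astype(int) * nx + (x / cell_size).astype(int)
          order = np.argsort(cell, kind="stable")
          return x[order], y[order], vx[order], vy[order]

      # Usage: sort every few dozen time steps; a nominal amount of sorting suffices
      rng = np.random.default_rng(2)
      n, nx = 1_000_000, 256
      x, y = rng.random(n), rng.random(n)
      vx, vy = rng.standard_normal(n), rng.standard_normal(n)
      x, y, vx, vy = sort_particles_by_cell(x, y, vx, vy, 1.0 / nx, nx)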

  15. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  16. HPC Colony II: FAST_OS II: Operating Systems and Runtime Systems at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Jose [IBM, Armonk, NY (United States)

    2013-11-13

    HPC Colony II has been a 36-month project focused on providing portable performance for leadership class machines—a task made difficult by the emerging variety of more complex computer architectures. The project attempts to move the burden of portable performance to adaptive system software, thereby allowing domain scientists to concentrate on their field rather than the fine details of a new leadership class machine. To accomplish our goals, we focused on adding intelligence into the system software stack. Our revised components include: new techniques to address OS jitter; new techniques to dynamically address load imbalances; new techniques to map resources according to architectural subtleties and application dynamic behavior; new techniques to dramatically improve the performance of checkpoint-restart; and new techniques to address membership service issues at scale.

  17. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.

  18. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Science.gov (United States)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem-specific parameter sets.
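
    The paper's exact coefficients are not given in the abstract; the following is a hedged sketch of a PSO velocity update incorporating the two additions named above, an exponentially decaying inertia weight and an extra attractive-force term toward an omnipresent "execution best" position. All coefficient values and the parameter vector are illustrative placeholders.

      import numpy as np

      def pso_velocity_update(v, x, p_best, g_best, exec_best, it,
                              w0=0.9, decay=0.02, c1=1.5, c2=1.5, c3=0.5, rng=None):
          """One velocity update with exponential decay and an attractive-force term."""
          rng = rng or np.random.default_rng()
          w = w0 * np.exp(-decay * it)                     # exponential velocity decay
          r1, r2, r3 = rng.random(3)
          return (w * v
                  + c1 * r1 * (p_best - x)                 # cognitive term
                  + c2 * r2 * (g_best - x)                 # social term
                  + c3 * r3 * (exec_best - x))             # attraction to "execution best"

      # Usage: one step for a hypothetical 4-dimensional MSER parameter vector
      x = np.array([5.0, 0.3, 200.0, 1.2])
      v = np.zeros_like(x)
      v = pso_velocity_update(v, x, p_best=x + 0.1, g_best=x - 0.2, exec_best=x - 0.1, it=10)
      x = x + v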

  19. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  20. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  1. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests must be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to it. From the estimated time complexity, the required clock frequency of the host is determined. The CPU frequency is then scaled up or down accordingly using the DVFS scheme, saving up to 55% of total energy consumption.
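
    A minimal sketch of the decision logic described above: estimate the work from the identified complexity class, then pick the lowest CPU frequency of the chosen host that still meets a deadline, so the DVFS governor can scale down and save energy. The frequency table, operations-per-cycle figure and deadline are hypothetical, not values from the paper.

      import math

      FREQUENCIES_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]          # hypothetical host P-states

      def estimated_operations(complexity, n):
          """Rough operation count from the identified asymptotic class."""
          table = {"O(n)": lambda n: n,
                   "O(n log n)": lambda n: n * math.log2(max(n, 2)),
                   "O(n^2)": lambda n: n ** 2}
          return table[complexity](n)

      def pick_frequency(complexity, n, deadline_s, ops_per_cycle=4.0):
          """Lowest frequency that still meets the deadline."""
          required_hz = estimated_operations(complexity, n) / (ops_per_cycle * deadline_s)
          for f in FREQUENCIES_GHZ:                        # ascending order
              if f * 1e9 >= required_hz:
                  return f
          return FREQUENCIES_GHZ[-1]                       # run flat out if the deadline is tight

      print(pick_frequency("O(n log n)", n=50_000_000, deadline_s=2.0))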

  2. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
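
    GiGA itself distributes the graph across Hadoop/Giraph workers; the single-node sketch below only illustrates the underlying de Bruijn construction (nodes are (k-1)-mers, edges are k-mers), with a toy k and toy reads rather than real sequencing data.

      from collections import defaultdict

      def de_bruijn_graph(reads, k=5):
          """Map each (k-1)-mer prefix to the (k-1)-mer suffixes that follow it."""
          graph = defaultdict(list)
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  graph[kmer[:-1]].append(kmer[1:])        # prefix -> suffix edge
          return graph

      reads = ["ACGTACGTGACC", "GTACGTGACCTT"]             # toy reads
      for prefix, suffixes in de_bruijn_graph(reads).items():
          print(prefix, "->", suffixes)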

  3. Identification of discrete vascular lesions in the extremities using post-mortem computed tomography angiography – Case reports

    NARCIS (Netherlands)

    Haakma, Wieke; Rohde, Marianne; Uhrenholt, Lars; Pedersen, Michael; Boel, Lene Warner Thorup

    2017-01-01

    In this case report, we introduced post-mortem computed tomography angiography (PMCTA) in three cases suffering from vascular lesions in the upper extremities. In each subject, the third part of the axillary arteries and veins were used to catheterize the arms. The vessels were filled with a barium

  4. The nonstationary impact of local temperature changes and ENSO on extreme precipitation at the global scale

    Science.gov (United States)

    Sun, Qiaohong; Miao, Chiyuan; Qiao, Yuanyuan; Duan, Qingyun

    2017-12-01

    The El Niño-Southern Oscillation (ENSO) and local temperature are important drivers of extreme precipitation. Understanding the impact of ENSO and temperature on the risk of extreme precipitation over global land will provide a foundation for risk assessment and climate-adaptive design of infrastructure in a changing climate. In this study, nonstationary generalized extreme value distributions were used to model extreme precipitation over global land for the period 1979-2015, with an ENSO indicator and temperature as covariates. Risk factors were estimated to quantify the contrast between the influence of different ENSO phases and temperature. The results show that extreme precipitation is dominated by ENSO over 22% of global land and by temperature over 26% of global land. With a warming climate, the risk of high-intensity daily extreme precipitation increases at high latitudes but decreases in tropical regions. For ENSO, large parts of North America, southern South America, and southeastern and northeastern China are shown to suffer greater risk in El Niño years, with more than double the chance of intense extreme precipitation in El Niño years compared with La Niña years. Moreover, regions with more intense precipitation are more sensitive to ENSO. Global climate models were used to investigate the changing relationship between extreme precipitation and the covariates. The risk of extreme, high-intensity precipitation increases across high latitudes of the Northern Hemisphere but decreases in middle and lower latitudes under a warming climate scenario, which will likely trigger increases in severe flooding and droughts across the globe. However, there are some uncertainties associated with the influence of ENSO on predictions of future extreme precipitation, with the spatial extent and risk varying among the different models.
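
    A hedged sketch of a nonstationary GEV fit of the kind described above, assuming the location parameter varies linearly with a single covariate (an ENSO index or temperature) while scale and shape stay constant; the paper's actual link functions and covariate handling may differ. Note that scipy's genextreme uses c = -xi relative to the usual climatological shape convention.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import genextreme

      def fit_nonstationary_gev(maxima, covariate):
          """Maximum-likelihood fit of a GEV with location mu0 + mu1 * covariate."""
          def nll(params):
              mu0, mu1, log_sigma, xi = params
              return -np.sum(genextreme.logpdf(maxima, c=-xi,
                                               loc=mu0 + mu1 * covariate,
                                               scale=np.exp(log_sigma)))
          start = np.array([np.mean(maxima), 0.0, np.log(np.std(maxima)), 0.1])
          return minimize(nll, start, method="Nelder-Mead").x   # mu0, mu1, log_sigma, xi

      # Usage with synthetic annual maxima whose location shifts with an ENSO-like index
      rng = np.random.default_rng(3)
      enso = rng.standard_normal(60)
      annual_max = genextreme.rvs(c=-0.1, loc=30 + 4 * enso, scale=8, random_state=4)
      print(fit_nonstationary_gev(annual_max, enso))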

  5. The scaling of population persistence with carrying capacity does not asymptote in populations of a fish experiencing extreme climate variability.

    Science.gov (United States)

    White, Richard S A; Wintle, Brendan A; McHugh, Peter A; Booker, Douglas J; McIntosh, Angus R

    2017-06-14

    Despite growing concerns regarding increasing frequency of extreme climate events and declining population sizes, the influence of environmental stochasticity on the relationship between population carrying capacity and time-to-extinction has received little empirical attention. While time-to-extinction increases exponentially with carrying capacity in constant environments, theoretical models suggest increasing environmental stochasticity causes asymptotic scaling, thus making minimum viable carrying capacity vastly uncertain in variable environments. Using empirical estimates of environmental stochasticity in fish metapopulations, we showed that increasing environmental stochasticity resulting from extreme droughts was insufficient to create asymptotic scaling of time-to-extinction with carrying capacity in local populations as predicted by theory. Local time-to-extinction increased with carrying capacity due to declining sensitivity to demographic stochasticity, and the slope of this relationship declined significantly as environmental stochasticity increased. However, recent 1 in 25 yr extreme droughts were insufficient to extirpate populations with large carrying capacity. Consequently, large populations may be more resilient to environmental stochasticity than previously thought. The lack of carrying capacity-related asymptotes in persistence under extreme climate variability reveals how small populations affected by habitat loss or overharvesting, may be disproportionately threatened by increases in extreme climate events with global warming. © 2017 The Author(s).

  6. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain gains from such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
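
    The full CANN model is beyond a short listing; the sketch below only shows the synaptic short-term facilitation/depression dynamics in a simplified Tsodyks-Markram form with a forward-Euler step. The time constants and spike-train parameters are illustrative, standing in for the increasing time constants the paper assigns to STF, SFA and STD.

      import numpy as np

      def stp_step(u, x, spike, dt, U=0.2, tau_f=0.5, tau_d=0.2):
          """One Euler step of short-term plasticity: u facilitates (STF), x depletes (STD)."""
          du = -u / tau_f + U * (1.0 - u) * spike
          dx = (1.0 - x) / tau_d - u * x * spike
          return u + dt * du, x + dt * dx, u * x * spike   # last value: effective drive

      # Usage: drive one synapse with a 20 Hz Poisson spike train for 1 s
      dt, rate = 1e-3, 20.0
      rng = np.random.default_rng(5)
      u, x = 0.0, 1.0
      for _ in range(int(1.0 / dt)):
          spike = float(rng.random() < rate * dt) / dt     # delta-like spike of unit area
          u, x, drive = stp_step(u, x, spike, dt)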

  7. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference, however, poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity at full image resolution. We implement high-performance software that allows us to efficiently apply stochastic block modelling with MCMC sampling on large complex networks...

  8. Assessing future climatic changes of rainfall extremes at small spatio-temporal scales

    DEFF Research Database (Denmark)

    Gregersen, Ida Bülow; Sørup, Hjalte Jomo Danielsen; Madsen, Henrik

    2013-01-01

    Climate change is expected to influence the occurrence and magnitude of rainfall extremes and hence the flood risks in cities. Major impacts of an increased pluvial flood risk are expected to occur at hourly and sub-hourly resolutions. This makes convective storms the dominant rainfall type in relation to urban flooding. The present study focuses on high-resolution regional climate model (RCM) skill in simulating sub-daily rainfall extremes. Temporal and spatial characteristics of output from three different RCM simulations with 25 km resolution are compared to point rainfall extremes estimated from observed data. The applied RCM data sets represent two different models and two different types of forcing. Temporal changes in observed extreme point rainfall are partly reproduced by the RCM RACMO when forced by ERA40 re-analysis data. Two ECHAM forced simulations show similar increases...

  9. Body composition of the human lower extremity observed by computed tomography

    International Nuclear Information System (INIS)

    Suzuki, Masataka; Hasegawa, Makiko; Wu, Chung-Lei; Mimaru, Osamu

    1987-01-01

    Using computed tomography (CT) images, the body composition of the lower extremity was examined in 24 adults (10 male, 14 female). CT images were taken at a proximal section (upper third of the thigh), a distal section (lower third of the thigh), and a leg section (upper third of the leg), and the quantities determined from the images included the areas of the total cross-section, muscle, subcutaneous fat, connective tissue, and bone in each cross-section. The ratio of each component to the total area was surveyed. Age-related changes and differences among the three body types, defined by Rohrer's index, were discussed for both sexes. The following results were obtained. 1. In males, the ratio of each component to the total sectional area at the three section levels was generally highest for muscle, followed in order by subcutaneous fat, connective tissue, and bone. In females, by contrast, subcutaneous fat exceeded muscle in the proximal section for the A and C body types, whereas muscle exceeded subcutaneous fat for the D body type in this section and for all body types in the distal and leg sections. 2. Regarding the correlation between the component ratios in a section and Rohrer's index or age, the ratios of subcutaneous fat and connective tissue were positively related, and the ratio of muscle in the femoral sections was negatively related, in males. 3. In males, age-related decreases in muscle area were found below age 50 in the extensors, at age 50 in the adductors, and at about age 60 in the flexors in the proximal section, and at age 50 in the extensors, after age 55 in the adductors, and at about age 60 in the flexors in the distal section. In the leg section, the decreasing tendency with age was most pronounced in the flexors in males and was also found after age 50 in females. (author)

  10. Extremity exams optimization for computed radiography; Otimizacao de exames de extremidade para radiologia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana Luiza M.; Alves, Allan Felipe F.; Velo, Alexandre F.; Miranda, Jose Ricardo A., E-mail: analuiza@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2013-08-15

    Computed radiography (CR) has become the most widely used device for image acquisition since its introduction in the 1980s. Detection and early diagnosis obtained through CR examinations are important for the successful treatment of diseases of the hand. However, the norms used for optimization of these images are based on international protocols. Therefore, it is necessary to determine radiographic technique charts for the CR system that provide a safe medical diagnosis with doses as low as reasonably achievable. The objective of this work is to develop a homogeneous extremity phantom to be used in the calibration of radiographic techniques. In the construction of the phantom, an algorithm for quantifying tissues was developed using Matlab®. In this process, the average thicknesses of bone and soft tissue in the hand region of an anthropomorphic phantom were quantified, as well as the corresponding thicknesses of the simulator materials (aluminum and Lucite), using a technique of mask application and removal based on the Gaussian histogram corresponding to each tissue of interest. The homogeneous phantom was used to calibrate the x-ray beam, and the resulting techniques were applied to a calibrated anthropomorphic hand phantom. The images were evaluated by specialists in radiology using the VGA method. Skin entrance surface doses (SED) corresponding to each technique were estimated with the respective tube charges. The thicknesses of the simulator materials that constitute the homogeneous phantom, as determined in this study, were 19.01 mm of acrylic and 0.81 mm of aluminum. Better image quality was obtained with doses as low as reasonably achievable, with dose and tube charge reduced by about 53.35% and 37.78%, respectively, compared with those normally used in the routine clinical diagnostic radiology of HCFMB-UNESP. (author)

  11. Materials and nanosystems : interdisciplinary computational modeling at multiple scales

    International Nuclear Information System (INIS)

    Huber, S.E.

    2014-01-01

    Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large scales, first an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers, shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of

  12. Extreme rainfall, vulnerability and risk: a continental-scale assessment for South America

    Science.gov (United States)

    Vorosmarty, Charles J.; de Guenni, Lelys Bravo; Wollheim, Wilfred M.; Pellerin, Brian A.; Bjerklie, David M.; Cardoso, Manoel; D'Almeida, Cassiano; Colon, Lilybeth

    2013-01-01

    Extreme weather continues to preoccupy society as a formidable public safety concern bearing huge economic costs. While attention has focused on global climate change and how it could intensify key elements of the water cycle such as precipitation and river discharge, it is the conjunction of geophysical and socioeconomic forces that shapes human sensitivity and risks to weather extremes. We demonstrate here the use of high-resolution geophysical and population datasets together with documentary reports of rainfall-induced damage across South America over a multi-decadal, retrospective time domain (1960–2000). We define and map extreme precipitation hazard, exposure, affected populations, vulnerability and risk, and use these variables to analyse the impact of floods as a water security issue. Geospatial experiments uncover major sources of risk from natural climate variability and population growth, with change in climate extremes bearing a minor role. While rural populations display greatest relative sensitivity to extreme rainfall, urban settings show the highest rates of increasing risk. In the coming decades, rapid urbanization will make South American cities the focal point of future climate threats but also an opportunity for reducing vulnerability, protecting lives and sustaining economic development through both traditional and ecosystem-based disaster risk management systems.

  13. Extreme rainfall, vulnerability and risk: a continental-scale assessment for South America.

    Science.gov (United States)

    Vörösmarty, Charles J; Bravo de Guenni, Lelys; Wollheim, Wilfred M; Pellerin, Brian; Bjerklie, David; Cardoso, Manoel; D'Almeida, Cassiano; Green, Pamela; Colon, Lilybeth

    2013-11-13

    Extreme weather continues to preoccupy society as a formidable public safety concern bearing huge economic costs. While attention has focused on global climate change and how it could intensify key elements of the water cycle such as precipitation and river discharge, it is the conjunction of geophysical and socioeconomic forces that shapes human sensitivity and risks to weather extremes. We demonstrate here the use of high-resolution geophysical and population datasets together with documentary reports of rainfall-induced damage across South America over a multi-decadal, retrospective time domain (1960-2000). We define and map extreme precipitation hazard, exposure, affected populations, vulnerability and risk, and use these variables to analyse the impact of floods as a water security issue. Geospatial experiments uncover major sources of risk from natural climate variability and population growth, with change in climate extremes bearing a minor role. While rural populations display greatest relative sensitivity to extreme rainfall, urban settings show the highest rates of increasing risk. In the coming decades, rapid urbanization will make South American cities the focal point of future climate threats but also an opportunity for reducing vulnerability, protecting lives and sustaining economic development through both traditional and ecosystem-based disaster risk management systems.

  14. Sensitivity of extreme precipitation to temperature: the variability of scaling factors from a regional to local perspective

    Science.gov (United States)

    Schroeer, K.; Kirchengast, G.

    2018-06-01

    Potential increases in extreme rainfall induced hazards in a warming climate have motivated studies to link precipitation intensities to temperature. Increases exceeding the Clausius-Clapeyron (CC) rate of 6-7% per °C are seen in short-duration, convective, high-percentile rainfall at mid latitudes, but the rates of change cease or revert at regionally variable threshold temperatures due to moisture limitations. It is unclear, however, what these findings mean in terms of the actual risk of extreme precipitation on a regional to local scale. When conditioning precipitation intensities on local temperatures, key influences on the scaling relationship, such as the annual cycle and regional weather patterns, need better understanding. Here we analyze these influences using sub-hourly to daily precipitation data from a dense network of 189 stations in south-eastern Austria. We find that the temperature sensitivities in the mountainous western region are lower than in the eastern lowlands. This is due to the different weather patterns that cause extreme precipitation in these regions. Sub-hourly and hourly intensities intensify at super-CC and CC rates, respectively, up to temperatures of about 17 °C. However, we also find that, because of the regional and seasonal variability of the precipitation intensities, a smaller scaling factor can imply a larger absolute change in intensity. Our insights underline that temperature-precipitation scaling requires careful interpretation of the intent and setting of the study. When this is considered, conditional scaling factors can help to better understand which influences control the intensification of rainfall with temperature on a regional scale.
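
    A minimal sketch of the usual binning estimate behind such scaling factors: events are binned by temperature, a high percentile of intensity is taken per bin, and an exponential fit gives a rate directly comparable to the Clausius-Clapeyron value of roughly 6-7% per °C. The percentile, bin width and minimum sample size are illustrative choices, not the settings used in the study.

      import numpy as np

      def scaling_rate(temperature, intensity, percentile=99, bin_width=2.0, min_events=50):
          """Percent change of the chosen intensity percentile per degree Celsius."""
          edges = np.arange(temperature.min(), temperature.max() + bin_width, bin_width)
          centers, extremes = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              sel = (temperature >= lo) & (temperature < hi)
              if sel.sum() >= min_events:                  # skip poorly sampled bins
                  centers.append(0.5 * (lo + hi))
                  extremes.append(np.percentile(intensity[sel], percentile))
          slope, _ = np.polyfit(centers, np.log(extremes), 1)
          return 100.0 * (np.exp(slope) - 1.0)

      # Synthetic events built with a 7%/°C relation should return roughly 7
      rng = np.random.default_rng(6)
      T = rng.uniform(0, 20, 50_000)
      I = 2.0 * 1.07 ** T * rng.lognormal(0.0, 0.6, T.size)
      print(round(scaling_rate(T, I), 1))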

  15. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    NARCIS (Netherlands)

    Wahl, T.; Haigh, I.D.; Nicholls, R.J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.B.A.

    2017-01-01

    One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future

  16. SCALE-4 [Standardized Computer Analyses for Licensing Evaluation]: An improved computational system for spent-fuel cask analysis

    International Nuclear Information System (INIS)

    Parks, C.V.

    1989-01-01

    The purpose of this paper is to provide specific information regarding the improvements available with Version 4.0 of the SCALE system and to discuss the future of SCALE within the current computing and regulatory environment. The emphasis is on the improvements in SCALE-4 over those available in SCALE-3. 10 refs., 1 fig., 1 tab

  17. Computational optimization of catalyst distributions at the nano-scale

    International Nuclear Information System (INIS)

    Ström, Henrik

    2017-01-01

    Highlights:
    • Macroscopic data sampled from a DSMC simulation contain statistical scatter.
    • Simulated annealing is evaluated as an optimization algorithm with DSMC.
    • Proposed method is more robust than a gradient search method.
    • Objective function uses the mass transfer rate instead of the reaction rate.
    • Combined algorithm is more efficient than a macroscopic overlay method.
    Abstract: Catalysis is a key phenomenon in a great number of energy processes, including feedstock conversion, tar cracking, emission abatement and optimizations of energy use. Within heterogeneous, catalytic nano-scale systems, the chemical reactions typically proceed at very high rates at a gas–solid interface. However, the statistical uncertainties characteristic of molecular processes pose efficiency problems for computational optimizations of such nano-scale systems. The present work investigates the performance of a Direct Simulation Monte Carlo (DSMC) code with a stochastic optimization heuristic for evaluations of an optimal catalyst distribution. The DSMC code treats molecular motion with homogeneous and heterogeneous chemical reactions in wall-bounded systems, and algorithms have been devised that allow optimization of the distribution of a catalytically active material within a three-dimensional duct (e.g. a pore). The objective function is the outlet concentration of computational molecules that have interacted with the catalytically active surface, and the optimization method used is simulated annealing. The application of a stochastic optimization heuristic is shown to be more efficient within the present DSMC framework than using a macroscopic overlay method. Furthermore, it is shown that the performance of the developed method is superior to that of a gradient search method for the current class of problems. Finally, the advantages and disadvantages of different types of objective functions are discussed.
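
    A hedged skeleton of the optimization loop described above: simulated annealing over a binary catalyst placement whose objective is a noisy estimate, as it would be when each evaluation is a DSMC run. The stand-in objective, move set and cooling schedule are illustrative only.

      import math
      import random

      def noisy_objective(placement):
          """Stand-in for one DSMC evaluation: a noisy estimate of outlet conversion.
          In the paper this would be a full DSMC run of the catalytic duct."""
          signal = sum(w * c for w, c in zip(placement, range(len(placement), 0, -1)))
          return signal + random.gauss(0.0, 0.5)           # statistical scatter

      def simulated_annealing(n_sites=20, n_active=5, steps=2000, T0=5.0, cooling=0.999):
          placement = [1] * n_active + [0] * (n_sites - n_active)
          random.shuffle(placement)
          current = best = noisy_objective(placement)
          T = T0
          for _ in range(steps):
              i = random.choice([k for k, v in enumerate(placement) if v == 1])
              j = random.choice([k for k, v in enumerate(placement) if v == 0])
              placement[i], placement[j] = 0, 1            # move one catalyst patch
              candidate = noisy_objective(placement)
              if candidate >= current or random.random() < math.exp((candidate - current) / T):
                  current, best = candidate, max(best, candidate)   # accept, possibly worse
              else:
                  placement[i], placement[j] = 1, 0        # revert the move
              T *= cooling
          return placement, best

      print(simulated_annealing()[1])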

  18. Scale dependency of regional climate modeling of current and future climate extremes in Germany

    Science.gov (United States)

    Tölle, Merja H.; Schefczyk, Lukas; Gutjahr, Oliver

    2017-11-01

    A warmer climate is projected for mid-Europe, with less precipitation in summer, but with intensified extremes of precipitation and near-surface temperature. However, the extent and magnitude of such changes are associated with considerable uncertainty because of the limitations of model resolution and parameterizations. Here, we present the results of convection-permitting regional climate model simulations for Germany integrated with the COSMO-CLM using a horizontal grid spacing of 1.3 km, and additional 4.5- and 7-km simulations with convection parameterized. Of particular interest is how the temperature and precipitation fields and their extremes depend on the horizontal resolution for current and future climate conditions. The spatial variability of precipitation increases with resolution because of more realistic orography and physical parameterizations, but values are overestimated in summer and over mountain ridges in all simulations compared to observations. The spatial variability of temperature is improved at a resolution of 1.3 km, but the results are cold-biased, especially in summer. The increase in resolution from 7/4.5 km to 1.3 km is accompanied by 1 °C less future warming in summer. Modeled future precipitation extremes will be more severe, and temperature extremes will not exclusively increase with higher resolution. Although the differences between the resolutions considered (7/4.5 km and 1.3 km) are small, we find that the differences in the changes in extremes are large. High-resolution simulations require further studies, with effective parameterizations and tunings for different topographic regions. Impact models and assessment studies may benefit from such high-resolution model results, but should account for the impact of model resolution on model processes and climate change.

  19. Extreme weather events in southern Germany - Climatological risk and development of a large-scale identification procedure

    Science.gov (United States)

    Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.

    2009-04-01

    Extreme weather events such as thunderstorms, hail and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about the present-day climatological risk and its economic effects, and about its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy that are particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning and prediction systems to cope with such events. Such an attempt is made for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for the classification of the causing synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) proved to be most suitable for correct classification of the large-scale flow conditions. Certain CWTs turned out to be prone to heavy precipitation, while others carry a very low risk of such events. Other large-scale parameters are tested in connection with CWTs to find a combination that has the highest skill in identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results, but assumes knowledge of regional orographic particularities. Ongoing work is therefore focused on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady growth rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM in simulating the frequency and intensity of extreme weather events. Data from the A1B scenario (2000-2050) will be examined for a possible climate change

  20. Simulation of large scale air detritiation operations by computer modeling and bench-scale experimentation

    International Nuclear Information System (INIS)

    Clemmer, R.G.; Land, R.H.; Maroni, V.A.; Mintz, J.M.

    1978-01-01

    Although some experience has been gained in the design and construction of 0.5 to 5 m³/s air-detritiation systems, little information is available on the performance of these systems under realistic conditions. Recently completed studies at ANL have attempted to provide some perspective on this subject. A time-dependent computer model was developed to study the effects of various reaction and soaking mechanisms that could occur in a typically-sized fusion reactor building (approximately 10⁵ m³) following a range of tritium releases (2 to 200 g). In parallel with the computer study, a small (approximately 50 liter) test chamber was set up to investigate cleanup characteristics under conditions which could also be simulated with the computer code. Whereas results of computer analyses indicated that only approximately 10⁻³ percent of the tritium released to an ambient enclosure should be converted to tritiated water, the bench-scale experiments gave evidence of conversions to water greater than 1%. Furthermore, although the amounts (both calculated and observed) of soaked-in tritium are usually only a very small fraction of the total tritium release, the soaked tritium is significant, in that its continuous return to the enclosure extends the cleanup time beyond the predicted value in the absence of any soaking mechanisms
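
    A minimal sketch of the kind of time-dependent balance such a model evaluates: airborne tritium is removed by the detritiation flow while a slowly exchanging soaked-in inventory returns tritium to the room air and stretches the cleanup. The rate constants below are illustrative placeholders, not the ANL model's parameters.

      import numpy as np

      def cleanup_curve(C0, V=1e5, Q=5.0, k_soak=1e-6, k_return=1e-7,
                        dt=60.0, t_end=7 * 24 * 3600.0):
          """Forward-Euler integration of
              dC/dt = -(Q/V + k_soak) * C + (k_return / V) * S
              dS/dt = k_soak * V * C - k_return * S
          C: airborne concentration, S: soaked-in inventory (all units normalized)."""
          C, S = C0, 0.0
          times, conc = [0.0], [C0]
          for t in np.arange(dt, t_end + dt, dt):
              dC = -(Q / V + k_soak) * C + (k_return / V) * S
              dS = k_soak * V * C - k_return * S
              C, S = C + dt * dC, S + dt * dS
              times.append(t)
              conc.append(C)
          return np.array(times), np.array(conc)

      t, c = cleanup_curve(C0=1.0)                         # normalized release
      print("fraction remaining after 24 h:", c[np.searchsorted(t, 24 * 3600.0)])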

  1. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests 80,000x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  2. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    International Nuclear Information System (INIS)

    Yu, Lejiang; Zhong, Shiyuan; Pei, Lisi; Bian, Xindi; Heilman, Warren E

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for severe flooding over a large region, little is known about how extreme precipitation events that cause flash flooding and occur at sub-daily time scales have changed over time. Here we use the observed hourly precipitation from the North American Land Data Assimilation System Phase 2 forcing datasets to determine trends in the frequency of extreme precipitation events of short (1 h, 3 h, 6 h, 12 h and 24 h) duration for the period 1979–2013. The results indicate an increasing trend in the central and eastern US. Over most of the western US, especially the Southwest and the Intermountain West, the trends are generally negative. These trends can be largely explained by the interdecadal variability of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation (AMO), with the AMO making a greater contribution to the trends in both warm and cold seasons. (letter)
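
    As a minimal sketch of the trend calculation described above (with synthetic data standing in for the NLDAS-2 hourly forcing), the following counts hours above a high percentile threshold for each year and fits a linear trend; the threshold choice, wet-hour cutoff and gamma-distributed rainfall are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1979, 2014)                       # 1979-2013
        hours_per_year = 365 * 24

        # Hypothetical hourly precipitation for one grid cell (mm/h); a real
        # analysis would read the NLDAS-2 forcing instead.
        precip = rng.gamma(shape=0.05, scale=2.0, size=(years.size, hours_per_year))

        # Define "extreme" as exceeding the 99.9th percentile of all wet hours.
        wet = precip[precip > 0.1]
        threshold = np.percentile(wet, 99.9)

        # Annual counts of extreme hours and their linear trend.
        counts = (precip >= threshold).sum(axis=1)
        slope, _ = np.polyfit(years, counts, 1)
        print(f"trend: {10 * slope:+.2f} extreme hours per decade")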

  3. Reliability, validity, and sensitivity to change of the lower extremity functional scale in individuals affected by stroke.

    Science.gov (United States)

    Verheijde, Joseph L; White, Fred; Tompkins, James; Dahl, Peder; Hentz, Joseph G; Lebec, Michael T; Cornwall, Mark

    2013-12-01

    To investigate reliability, validity, and sensitivity to change of the Lower Extremity Functional Scale (LEFS) in individuals affected by stroke. The secondary objective was to test the validity and sensitivity of a single-item linear analog scale (LAS) of function. Prospective cohort reliability and validation study. A single rehabilitation department in an academic medical center. Forty-three individuals receiving neurorehabilitation for lower extremity dysfunction after stroke were studied. Their ages ranged from 32 to 95 years, with a mean of 70 years; 77% were men. Test-retest reliability was assessed by calculating the classical intraclass correlation coefficient, and the Bland-Altman limits of agreement. Validity was assessed by calculating the Pearson correlation coefficient between the instruments. Sensitivity to change was assessed by comparing baseline scores with end of treatment scores. Measurements were taken at baseline, after 1-3 days, and at 4 and 8 weeks. The LEFS, Short-Form-36 Physical Function Scale, Berg Balance Scale, Six-Minute Walk Test, Five-Meter Walk Test, Timed Up-and-Go test, and the LAS of function were used. The test-retest reliability of the LEFS was found to be excellent (ICC = 0.96). Correlated with the 6 other measures of function studied, the validity of the LEFS was found to be moderate to high (r = 0.40-0.71). Regarding the sensitivity to change, the mean LEFS scores from baseline to study end increased 1.2 SD and for LAS 1.1 SD. LEFS exhibits good reliability, validity, and sensitivity to change in patients with lower extremity impairments secondary to stroke. Therefore, the LEFS can be a clinically efficient outcome measure in the rehabilitation of patients with subacute stroke. The LAS is shown to be a time-saving and reasonable option to track changes in a patient's functional status. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
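
    A minimal sketch of the test-retest statistics named above (an intraclass correlation and Bland-Altman limits of agreement), assuming an (n subjects x 2 sessions) array of LEFS scores; the scores below are synthetic and the ICC(2,1) form is one common choice, not necessarily the exact variant used in the study.

        import numpy as np

        def icc_2_1(scores):
            """Two-way random, single-measure ICC(2,1) for an (n_subjects, k_raters) array."""
            n, k = scores.shape
            grand = scores.mean()
            row_means = scores.mean(axis=1)
            col_means = scores.mean(axis=0)
            ss_total = ((scores - grand) ** 2).sum()
            ss_rows = k * ((row_means - grand) ** 2).sum()
            ss_cols = n * ((col_means - grand) ** 2).sum()
            ms_rows = ss_rows / (n - 1)
            ms_cols = ss_cols / (k - 1)
            ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                         + k * (ms_cols - ms_err) / n)

        # Hypothetical test-retest LEFS scores (0-80) for 43 subjects.
        rng = np.random.default_rng(2)
        true_score = rng.uniform(10, 70, size=43)
        scores = np.column_stack([true_score + rng.normal(0, 3, 43),
                                  true_score + rng.normal(0, 3, 43)])

        print(f"ICC(2,1) = {icc_2_1(scores):.2f}")

        # Bland-Altman limits of agreement between the two sessions.
        diff = scores[:, 0] - scores[:, 1]
        print(f"limits of agreement: {diff.mean():.1f} +/- {1.96 * diff.std(ddof=1):.1f}")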

  4. Computational approach on PEB process in EUV resist: multi-scale simulation

    Science.gov (United States)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for high performance and low cost of semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve the goal. As a predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, post exposure bake has been studied mainly by experimental groups, but development of its photoresist has reached a critical point because the mechanisms at work during the process remain largely unknown. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculation (quantum mechanical simulation) was conducted to quantitatively predict the activation energy and probability of the chemical reactions, and these were applied to molecular dynamics simulation for constructing a reliable computational model. Then, the overall chemical reactions were simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies the phenomena of both quantum and atomic scales during the post exposure bake process, and it will be helpful to understand critical factors affecting the performance of the resulting photoresist and to design the next-generation material.

  5. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
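
    A minimal 1-D sketch of the decomposition-with-uncertain-interfaces idea (not the project's algorithm): the domain of -u'' = 1 is split into two overlapping subdomains, each subdomain is solved many times with noisy samples of its interface boundary value to mimic uncertainty, and the sample-averaged solutions are exchanged until the subdomains agree; grid sizes, noise level and sample count are arbitrary choices.

        import numpy as np

        def solve_subdomain(a, b, u_a, u_b, n, f=1.0):
            """Second-order finite-difference solve of -u'' = f on [a, b] with
            Dirichlet values u_a, u_b at the ends (n intervals)."""
            h = (b - a) / n
            A = (np.diag(np.full(n - 1, 2.0))
                 + np.diag(np.full(n - 2, -1.0), 1)
                 + np.diag(np.full(n - 2, -1.0), -1))
            rhs = np.full(n - 1, f * h * h)
            rhs[0] += u_a
            rhs[-1] += u_b
            u = np.empty(n + 1)
            u[0], u[-1] = u_a, u_b
            u[1:-1] = np.linalg.solve(A, rhs)
            return u

        rng = np.random.default_rng(3)
        n = 60                        # intervals per subdomain (grid spacing 0.01)
        uA = np.zeros(n + 1)          # subdomain A covers [0.0, 0.6]
        uB = np.zeros(n + 1)          # subdomain B covers [0.4, 1.0]
        sigma, n_samples = 0.01, 16   # interface noise level and samples per update

        for it in range(20):
            # Interface values are uncertain: sample them with noise and average
            # the independent subdomain solves (mimicking fault-corrupted data).
            g_right = uB[20] + rng.normal(0.0, sigma, n_samples)   # uB at x = 0.6
            g_left = uA[40] + rng.normal(0.0, sigma, n_samples)    # uA at x = 0.4
            uA = np.mean([solve_subdomain(0.0, 0.6, 0.0, g, n) for g in g_right], axis=0)
            uB = np.mean([solve_subdomain(0.4, 1.0, g, 0.0, n) for g in g_left], axis=0)

        exact_mid = 0.5 * 0.5 * (1.0 - 0.5)          # exact u(x) = x(1-x)/2 at x = 0.5
        print(f"u(0.5) ~ {uA[50]:.4f}   (exact {exact_mid:.4f})")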

  6. Rain Characteristics and Large-Scale Environments of Precipitation Objects with Extreme Rain Volumes from TRMM Observations

    Science.gov (United States)

    Zhou, Yaping; Lau, William K M.; Liu, Chuntao

    2013-01-01

    This study adopts a "precipitation object" approach by using 14 years of Tropical Rainfall Measuring Mission (TRMM) Precipitation Feature (PF) and National Centers for Environmental Prediction (NCEP) reanalysis data to study rainfall structure and environmental factors associated with extreme heavy rain events. Characteristics of instantaneous extreme volumetric PFs are examined and compared to those of intermediate and small systems. It is found that instantaneous PFs exhibit a much wider scale range compared to the daily gridded precipitation accumulation range. The top 1% of the rainiest PFs contribute over 55% of total rainfall and have rain volumes 2 orders of magnitude greater than those of the median PFs. We find a threshold near the top 10% beyond which the PFs grow exponentially into larger, deeper, and colder rain systems. NCEP reanalyses show that midlevel relative humidity and total precipitable water increase steadily with increasingly larger PFs, along with a rapid increase of 500 hPa upward vertical velocity beyond the top 10%. This provides the necessary moisture convergence to amplify and sustain the extreme events. The rapid increase in vertical motion is associated with the release of convective available potential energy (CAPE) in mature systems, as is evident in the increase in CAPE of PFs up to the top 10% and the subsequent dropoff. The study illustrates distinct stages in the development of an extreme rainfall event including: (1) a systematic buildup in large-scale temperature and moisture, (2) a rapid change in rain structure, (3) explosive growth of the PF size, and (4) a release of CAPE before the demise of the event.
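
    The headline statistic above (the rainiest 1% of features contributing over half the rain volume) is easy to reproduce on any list of per-feature rain volumes; the sketch below uses a synthetic heavy-tailed sample rather than the TRMM PF database.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical rain volumes of individual precipitation features; a
        # lognormal gives the heavy-tailed size distribution typical of PFs.
        volumes = rng.lognormal(mean=2.0, sigma=2.0, size=200_000)

        cutoff = np.percentile(volumes, 99)          # top 1% of the rainiest PFs
        top_fraction = volumes[volumes >= cutoff].sum() / volumes.sum()
        median_ratio = cutoff / np.median(volumes)

        print(f"top 1% of PFs contribute {100 * top_fraction:.0f}% of total rain volume")
        print(f"99th-percentile PF is {median_ratio:.0f}x the median PF")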

  7. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    International Nuclear Information System (INIS)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-01-01

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs

  8. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Kil-Byoung; Bellan, Paul M. [Applied Physics, Caltech, 1200 E. California Boulevard, Pasadena, California 91125 (United States)

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  9. Changes in daily climate extremes in China and their connection to the large scale atmospheric circulation during 1961-2003

    Energy Technology Data Exchange (ETDEWEB)

    You, Qinglong [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); Friedrich-Schiller University Jena, Department of Geoinformatics, Jena (Germany); Graduate University of Chinese Academy of Sciences, Beijing (China); Kang, Shichang [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); State Key Laboratory of Cryospheric Science, Chinese Academy of Sciences, Lanzhou (China); Aguilar, Enric [Universitat Rovirai Virgili de Tarragona, Climate Change Research Group, Geography Unit, Tarragona (Spain); Pepin, Nick [University of Portsmouth, Department of Geography, Portsmouth (United Kingdom); Fluegel, Wolfgang-Albert [Friedrich-Schiller University Jena, Department of Geoinformatics, Jena (Germany); Yan, Yuping [National Climate Center, Beijing (China); Xu, Yanwei; Huang, Jie [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); Graduate University of Chinese Academy of Sciences, Beijing (China); Zhang, Yongjun [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China)

    2011-06-15

    negative magnitudes. This is inconsistent with changes of water vapor flux calculated from NCEP/NCAR reanalysis. Large scale atmospheric circulation changes derived from NCEP/NCAR reanalysis grids show that a strengthening anticyclonic circulation, increasing geopotential height and rapid warming over the Eurasian continent have contributed to the changes in climate extremes in China. (orig.)

  10. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Duque, Earl P.N. [J.M. Smith International, LLC, Rutherford, NJ (United States). DBA Intelligent Light; Whitlock, Brad J. [J.M. Smith International, LLC, Rutherford, NJ (United States). DBA Intelligent Light

    2017-08-25

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids the need to ever have to pay the costs of writing many terabytes of data files. Instead, in situ enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.
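
    The in situ pattern described above can be reduced to a simple idea: the solver hands its in-memory arrays to an analysis hook that keeps only small derived products. The sketch below is a generic illustration of that pattern, not the SENSEI API; the toy solver, field size and statistics kept are all arbitrary.

        import numpy as np

        def in_situ_stats(field, step, accumulator):
            """In situ analysis hook: reduce the full volume to a few statistics
            while the data is still in memory, instead of writing it to disk."""
            accumulator.append({"step": step,
                                "mean": float(field.mean()),
                                "max": float(field.max()),
                                "l2": float(np.linalg.norm(field))})

        # Toy "simulation": smooth a 3-D field and analyse it in situ every 10 steps.
        rng = np.random.default_rng(13)
        field = rng.normal(size=(64, 64, 64))
        stats = []
        for step in range(100):
            field = 0.99 * field + 0.01 * np.roll(field, 1, axis=0)   # stand-in for a solver
            if step % 10 == 0:
                in_situ_stats(field, step, stats)                     # no volume files written

        print(f"kept {len(stats)} small records instead of "
              f"{field.nbytes * 10 // 2**20} MB of volume dumps")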

  11. Changes in intensity of precipitation extremes in Romania on very high temporal scale and implications on the validity of the Clausius-Clapeyron relation

    Science.gov (United States)

    Busuioc, Aristita; Baciu, Madalina; Breza, Traian; Dumitrescu, Alexandru; Stoica, Cerasela; Baghina, Nina

    2016-04-01

    Many observational, theoretical and climate-model-based studies have suggested that warmer climates lead to more intense precipitation events, even when the total annual precipitation is slightly reduced. In this way, it was suggested that extreme precipitation events may increase at the Clausius-Clapeyron (CC) rate under global warming and the constraint of constant relative humidity. However, recent studies show that the relationship between extreme rainfall intensity and atmospheric temperature is much more complex than would be suggested by the CC relationship and is mainly dependent on precipitation temporal resolution, region, storm type and whether the analysis is conducted on storm events rather than fixed intervals. The present study examines the dependence between very high temporal scale extreme rainfall intensity and daily temperatures, with respect to the verification of the CC relation. To this end, the analysis is conducted on rainfall events rather than fixed intervals, using rainfall data based on graphic records that include intensities (mm/min) calculated over each interval of constant per-minute intensity. The part of the year with such data available (April to October) is considered at 5 stations over the interval 1950-2007. For Bucuresti-Filaret station the analysis is extended over a longer interval (1898-2007). For each rainfall event, the maximum intensity (mm/min) is retained and these time series are considered for the further analysis (abbreviated in the following as IMAX). The IMAX data were divided based on the daily mean temperature into 2°C-wide bins. Bins with fewer than 100 values were excluded. The 90th, 99th and 99.9th percentiles were computed from the binned data using the empirical distribution and their variability has been compared to the CC scaling (i.e. an exponential relation given by a 7% increase per degree of temperature rise). The results show a dependence close to double the CC relation for
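
    The binning-and-percentile procedure described above is straightforward to sketch; the code below assumes paired arrays of per-event maximum intensity and daily mean temperature (synthetic here, with a CC-like dependence built in) and compares the fitted percentile scaling to the roughly 7% per °C CC rate.

        import numpy as np

        rng = np.random.default_rng(5)
        n_events = 50_000

        # Hypothetical per-event data: daily mean temperature (degC) and the
        # event's maximum per-minute intensity (mm/min), CC-like by construction.
        temp = rng.uniform(0.0, 30.0, n_events)
        intensity = rng.gamma(1.5, 0.05, n_events) * np.exp(0.07 * temp)

        bin_edges = np.arange(0.0, 32.0, 2.0)              # 2 degC wide bins
        centres, p99 = [], []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            sel = intensity[(temp >= lo) & (temp < hi)]
            if sel.size >= 100:                            # drop sparse bins
                centres.append(0.5 * (lo + hi))
                p99.append(np.percentile(sel, 99))

        # Slope of log(percentile intensity) vs temperature; CC is ~7% per degC.
        slope = np.polyfit(centres, np.log(p99), 1)[0]
        print(f"fitted 99th-percentile scaling: {100 * (np.exp(slope) - 1):.1f}% per degC")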

  12. PuLP/XtraPuLP : Partitioning Tools for Extreme-Scale Graphs

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-21

    PuLP/XtraPuLP is software for partitioning graphs from several real-world problems. Graphs occur in many real-world settings, from road networks and social networks to scientific simulations. For efficient parallel processing these graphs have to be partitioned (split) with respect to metrics such as computation and communication costs. Our software allows such partitioning for massive graphs.
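
    To make the balance/edge-cut objective concrete, here is a toy partitioner that grows parts by breadth-first search until each holds roughly n/k vertices and then reports the edge cut; it only illustrates the metrics involved and is unrelated to the label-propagation scheme PuLP actually uses.

        from collections import deque

        def bfs_partition(adj, n_parts):
            """Toy partitioner: grow parts by BFS until each holds ~n/n_parts vertices."""
            n = len(adj)
            target = (n + n_parts - 1) // n_parts
            part = [-1] * n
            current, size = 0, 0
            for seed in range(n):
                if part[seed] != -1:
                    continue
                queue = deque([seed])
                while queue:
                    v = queue.popleft()
                    if part[v] != -1:
                        continue
                    part[v] = current
                    size += 1
                    if size == target:
                        current, size = min(current + 1, n_parts - 1), 0
                    queue.extend(u for u in adj[v] if part[u] == -1)
            return part

        def edge_cut(adj, part):
            """Number of edges whose endpoints fall in different parts."""
            return sum(part[u] != part[v] for u in range(len(adj)) for v in adj[u]) // 2

        # Small ring graph as a stand-in for a real network.
        adj = {i: [(i - 1) % 60, (i + 1) % 60] for i in range(60)}
        part = bfs_partition(adj, 4)
        print("part sizes:", [part.count(p) for p in range(4)],
              "edge cut:", edge_cut(adj, part))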

  13. Theoretical science and the future of large scale computing

    International Nuclear Information System (INIS)

    Wilson, K.G.

    1983-01-01

    The author describes the application of computer simulation to physical problems. In this connection the FORTRAN language is considered. Furthermore the application of computer networks is described whereby the processing of experimental data is considered. (HSI).

  14. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    Science.gov (United States)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes is less than for their synchronous counterpart. Stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.
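
    The sketch below illustrates the problem the AT schemes address, not the schemes themselves: an explicit finite-difference heat-equation update split across two "processing elements" whose halo (boundary) values are occasionally several steps stale, mimicking relaxed synchronization; grid size, delay statistics and run length are arbitrary.

        import numpy as np

        def heat_step(u, left, right, r):
            """One explicit (FTCS) step of u_t = u_xx on a block, given the halo
            values `left` and `right` received from the neighbouring blocks."""
            padded = np.concatenate(([left], u, [right]))
            return u + r * (padded[:-2] - 2.0 * u + padded[2:])

        rng = np.random.default_rng(6)
        nx, nt, r = 128, 2000, 0.25                 # r = dt/dx^2 (stable for FTCS)
        dx = 1.0 / nx
        x = np.arange(nx) * dx
        u0 = np.sin(2.0 * np.pi * x)                # periodic initial condition

        # Split the periodic domain across two "processing elements".
        uL, uR = u0[:nx // 2].copy(), u0[nx // 2:].copy()
        halo_hist = []

        for step in range(nt):
            halo_hist.append((uL[0], uL[-1], uR[0], uR[-1]))
            # Occasionally a PE sees halo values that are a few steps old.
            delay = rng.integers(1, 4) if rng.random() < 0.3 else 0
            L0, Lend, R0, Rend = halo_hist[max(0, len(halo_hist) - 1 - delay)]
            uL = heat_step(uL, Rend, R0, r)         # left neighbour of uL is the end of uR
            uR = heat_step(uR, Lend, L0, r)

        t_final = nt * r * dx * dx
        exact = np.exp(-4.0 * np.pi ** 2 * t_final) * np.sin(2.0 * np.pi * x)
        print("max error with delayed halos:",
              f"{np.abs(np.concatenate([uL, uR]) - exact).max():.2e}")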

  15. Final report for “Extreme-scale Algorithms and Solver Resilience”

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, William Douglas [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2017-06-30

    This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
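
    The reproducible dot product mentioned above rests on a simple idea: make the rounding pattern depend on a fixed global blocking rather than on how the vectors happen to be distributed. The sketch below contrasts a naive per-process reduction with a blocked, fixed-order reduction; it is an illustration of the idea, not the project's implementation.

        import numpy as np

        def naive_distributed_dot(x, y, n_procs):
            """Each process dots its local slice and the partial sums are added;
            the rounding pattern depends on the decomposition."""
            chunks = np.array_split(np.arange(x.size), n_procs)
            return sum(np.dot(x[c], y[c]) for c in chunks)

        def blocked_distributed_dot(x, y, n_procs, block=256):
            """Each process computes the partials of the whole global blocks it
            owns; the partials are then reduced in a fixed global order, so the
            result is bitwise identical for any number of processes."""
            n_blocks = (x.size + block - 1) // block
            partials = np.empty(n_blocks)
            for owned in np.array_split(np.arange(n_blocks), n_procs):
                for b in owned:                       # work done "on one process"
                    s = b * block
                    partials[b] = np.dot(x[s:s + block], y[s:s + block])
            total = 0.0
            for p in partials:                        # fixed left-to-right reduction
                total += p
            return total

        rng = np.random.default_rng(7)
        x = rng.normal(size=1_000_000)
        y = rng.normal(size=1_000_000)

        print("naive,   4 vs 16 processes:",
              naive_distributed_dot(x, y, 4) == naive_distributed_dot(x, y, 16))
        print("blocked, 4 vs 16 processes:",
              blocked_distributed_dot(x, y, 4) == blocked_distributed_dot(x, y, 16))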

  16. Extreme hydrometeorological events in the Peruvian Central Andes during austral summer and their relationship with the large-scale circulation

    Science.gov (United States)

    Sulca, Juan C.

    In this Master's dissertation, atmospheric circulation patterns associated with extreme hydrometeorological events in the Mantaro Basin, Peruvian Central Andes, and their teleconnections during the austral summer (December-January-February-March) are addressed. Extreme rainfall events in the Mantaro basin are related to variations of the large-scale circulation as indicated by the changing strength of the Bolivian High-Nordeste Low (BH-NL) system. Dry (wet) spells are associated with a weakening (strengthening) of the BH-NL system and reduced (enhanced) influx of moist air from the lowlands to the east due to strengthened westerly (easterly) wind anomalies at mid- and upper-tropospheric levels. At the same time extreme rainfall events of the opposite sign occur over northeastern Brazil (NEB) due to enhanced (inhibited) convective activity in conjunction with a strengthened (weakened) Nordeste Low. Cold episodes in the Mantaro Basin are grouped in three types: weak, strong and extraordinary cold episodes. Weak and strong cold episodes in the MB are mainly associated with a weakening of the BH-NL system due to tropical-extratropical interactions. Both types of cold episodes are associated with westerly wind anomalies at mid- and upper-tropospheric levels aloft the Peruvian Central Andes, which inhibit the influx of humid air masses from the lowlands to the east and hence limit the potential for development of convective cloud cover. The resulting clear sky conditions cause nighttime temperatures to drop, leading to cold extremes below the 10-percentile. Extraordinary cold episodes in the MB are associated with cold and dry polar air advection at all tropospheric levels toward the central Peruvian Andes. Therefore, weak and strong cold episodes in the MB appear to be caused by radiative cooling associated with reduced cloudiness, rather than cold air advection, while the latter plays an important role for extraordinary cold episodes only.

  17. Design considerations of 10 kW-scale, extreme ultraviolet SASE FEL for lithography

    CERN Document Server

    Pagani, C; Schneidmiller, E A; Yurkov, M V

    2001-01-01

    The semiconductor industry growth is driven to a large extent by steady advancements in microlithography. According to the newly updated industry road map, the 70 nm generation is anticipated to be available in the year 2008. However, the path to get there is not clear. The problem of construction of extreme ultraviolet (EUV) quantum lasers for lithography is still unsolved: progress in this field is rather moderate and we cannot expect a significant breakthrough in the near future. Nevertheless, there is clear path for optical lithography to take us to sub-100 nm dimensions. Theoretical and experimental work in Self-Amplified Spontaneous Emission (SASE) Free Electron Lasers (FEL) physics and the physics of superconducting linear accelerators over the last 10 years has pointed to the possibility of the generation of high-power optical beams with laser-like characteristics in the EUV spectral range. Recently, there have been important advances in demonstrating a high-gain SASE FEL at 100 nm wavelength (J. Andr...

  18. Computer-assisted upper extremity training using interactive biking exercise (iBikE) platform.

    Science.gov (United States)

    Jeong, In Cheol; Finkelstein, Joseph

    2012-01-01

    Upper extremity exercise training has been shown to improve clinical outcomes in different chronic health conditions. Arm-operated bicycles are frequently used to facilitate upper extremity training; however, effective use of these devices in patients' homes is hampered by the lack of remote connectivity with the clinical rehabilitation team, the inability to monitor exercise progress in real time using simple graphical representation, and the absence of an alert system that would prevent exertion levels from exceeding those approved by the clinical rehabilitation team. We developed an interactive biking exercise (iBikE) platform aimed at addressing these limitations. The platform uses a miniature wireless 3-axis accelerometer mounted on a patient's wrist that transmits the cycling acceleration data to a laptop. The laptop screen presents an exercise dashboard to the patient in real time, allowing easy graphical visualization of exercise progress and presentation of exercise parameters in relation to prescribed targets. The iBikE platform is programmed to alert the patient when exercise intensity exceeds the levels recommended by the patient's care provider. The iBikE platform has been tested in 7 healthy volunteers (age range: 26-50 years) and shown to reliably reflect exercise progress and to generate alerts at preset levels. Implementation of remote connectivity with the patient rehabilitation team is warranted for future extension and evaluation efforts.
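
    As a toy illustration of the alerting logic described above (not the iBikE software), the sketch below derives a windowed intensity metric from a synthetic 3-axis wrist accelerometer stream and counts the windows that exceed a prescribed limit; the signal model, sampling rate and threshold are all assumptions.

        import numpy as np

        # Hypothetical 3-axis wrist accelerometer stream (50 Hz) during arm
        # cycling; the patient speeds up halfway through the session.
        fs, duration = 50, 120
        t = np.arange(duration * fs) / fs
        cadence = 1.0 + 0.4 * (t > 60)
        rng = np.random.default_rng(11)
        accel = np.column_stack([2.0 * cadence * np.sin(2 * np.pi * cadence * t),
                                 2.0 * cadence * np.cos(2 * np.pi * cadence * t),
                                 9.81 + rng.normal(0.0, 0.1, t.size)])

        # Toy intensity metric: windowed RMS of the gravity-corrected acceleration
        # magnitude (a stand-in for the platform's cadence/intensity estimate).
        window = 5 * fs
        mag = np.linalg.norm(accel, axis=1) - 9.81
        intensity = np.sqrt(np.convolve(mag ** 2, np.ones(window) / window, mode="valid"))

        prescribed_max = 0.3          # limit set by the care provider (toy units)
        alerts = int(np.sum(intensity > prescribed_max))
        print(f"{alerts} windows exceeded the prescribed intensity -> raise patient alert")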

  19. Computed tomographic myelography characteristics of spinal cord atrophy in juvenile muscular atrophy of the upper extremity

    International Nuclear Information System (INIS)

    Hirabuki, Norio; Mitomo, Masanori; Miura, Takashi; Hashimoto, Tsutomu; Kawai, Ryuji; Kozuka, Takahiro

    1991-01-01

    Although atrophy of the lower cervical and upper thoracic cord in juvenile muscular atrophy of distal upper extremity has been reported, the atrophic patterns of the cord, especially in the transverse section, have not been studied extensively. The aim of this study is to clarify the atrophic patterns of the cord by CT myelography (CTM) and to discuss the pathogenesis of cord atrophy. Sixteen patients with juvenile muscular atrophy of distal upper extremity were examined by CTM. Atrophy of the lower cervical and upper thoracic cord, consistent with the segmental weakness, was seen in all patients. Flattening of the ventral convexity was a characteristic atrophic pattern of the cord. Bilateral cord atrophy was commonly observed; 8/12 patients with unilateral clinical form and all 4 patients with bilateral form showed bilateral cord atrophy with dominance on the clinical side. There was no correlation between the degree of cord atrophy and duration of symptoms. Flattening of the ventral convexity, associated with purely motor disturbances, reflects selective atrophy of the anterior horns in the cord, which is attributable to chronic ischemia. Cord atrophy proved to precede clinical manifestations. The characteristic atrophy of the cord provides useful information to confirm the diagnosis without long-term observation. (author). 21 refs.; 3 figs.; 2 tabs

  20. Probability of extreme interference levels computed from reliability approaches: application to transmission lines with uncertain parameters

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC fault using a statistical approach. It is based on reliability methods from probabilistic engineering mechanics. The probability of failure (i.e. the probability of exceeding a threshold) of a current induced by crosstalk is computed by taking into account uncertainties in the input parameters that influence interference levels in the context of transmission lines. The study has allowed us to evaluate the probability of failure of the induced current by using reliability methods with a relatively low computational cost compared to Monte Carlo simulation. (authors)
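
    For orientation, the quantity being estimated is simply P(I > threshold) under uncertain line parameters. The brute-force Monte Carlo reference below uses a made-up toy crosstalk response and made-up parameter distributions; reliability methods such as FORM reach comparable estimates with far fewer model evaluations, which is the paper's point.

        import numpy as np

        rng = np.random.default_rng(8)
        n_samples = 1_000_000

        # Hypothetical uncertain transmission-line parameters (toy model, not the
        # paper's): line height, separation and load resistance.
        height = rng.normal(2.0e-2, 2.0e-3, n_samples)        # m
        separation = rng.normal(5.0e-3, 5.0e-4, n_samples)    # m
        r_load = rng.lognormal(np.log(50.0), 0.1, n_samples)  # ohm

        # Toy response: induced current grows with the coupling ratio and
        # decreases with the load resistance.
        i_induced = 1.0e-3 * (height / separation) / (r_load / 50.0)

        threshold = 6.0e-3                                    # A, failure criterion
        p_fail = np.mean(i_induced > threshold)
        print(f"P(I > {threshold} A) ~ {p_fail:.2e} "
              f"(+/- {1.96 * np.sqrt(p_fail * (1 - p_fail) / n_samples):.1e})")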

  1. Compiling for Novel Scratch Pad Memory based Multicore Architectures for Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Aviral

    2016-02-05

    The objective of this proposal is to develop tools and techniques (in the compiler) to manage data of a task and communication among tasks on the scratch pad memory (SPM) of the core, so that any application (a set of tasks) can be executed efficiently on an SPM based manycore architecture.

  2. Discontinuous Galerkin method for computing gravitational waveforms from extreme mass ratio binaries

    International Nuclear Information System (INIS)

    Field, Scott E; Hesthaven, Jan S; Lau, Stephen R

    2009-01-01

    Gravitational wave emission from extreme mass ratio binaries (EMRBs) should be detectable by the joint NASA-ESA LISA project, spurring interest in analytical and numerical methods for investigating EMRBs. We describe a discontinuous Galerkin (dG) method for solving the distributionally forced 1+1 wave equations which arise when modeling EMRBs via the perturbation theory of Schwarzschild black holes. Despite the presence of jump discontinuities in the relevant polar and axial gravitational 'master functions', our dG method achieves global spectral accuracy, provided that we know the instantaneous position, velocity and acceleration of the small particle. Here these variables are known, since we assume that the particle follows a timelike geodesic of the Schwarzschild geometry. We document the results of several numerical experiments testing our method, and in our concluding section discuss the possible inclusion of gravitational self-force effects.
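
    For reference, the distributionally forced 1+1 master equations referred to above take the generic form (schematically; the exact potentials and source coefficients depend on the polar or axial case and are not reproduced here):

        \[
        \left[-\partial_t^2 + \partial_{r_*}^2 - V_\ell(r)\right]\Psi_{\ell m}(t,r_*)
          = F_{\ell m}(t)\,\delta\big(r_* - r_p(t)\big)
          + G_{\ell m}(t)\,\partial_{r_*}\delta\big(r_* - r_p(t)\big),
        \]

    where r_* is the tortoise coordinate, V_ell the curvature potential, and r_p(t) the particle's radial position along its Schwarzschild geodesic; the delta and delta-prime source terms are what produce the jump discontinuities that the dG scheme must represent without losing spectral accuracy.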

  3. A large-scale evaluation of computational protein function prediction

    NARCIS (Netherlands)

    Radivojac, P.; Clark, W.T.; Oron, T.R.; Schnoes, A.M.; Wittkop, T.; Kourmpetis, Y.A.I.; Dijk, van A.D.J.; Friedberg, I.

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be

  4. A computer literacy scale for newly enrolled nursing college students: development and validation.

    Science.gov (United States)

    Lin, Tung-Cheng

    2011-12-01

    Increasing application and use of information systems and mobile technologies in the healthcare industry require increasing nurse competency in computer use. Computer literacy is defined as basic computer skills, whereas computer competency is defined as the computer skills necessary to accomplish job tasks. Inadequate attention has been paid to computer literacy and computer competency scale validity. This study developed a computer literacy scale with good reliability and validity and investigated the current computer literacy of newly enrolled students to develop computer courses appropriate to students' skill levels and needs. This study referenced Hinkin's process to develop a computer literacy scale. Participants were newly enrolled first-year undergraduate students, with nursing or nursing-related backgrounds, currently attending a course entitled Information Literacy and Internet Applications. Researchers examined reliability and validity using confirmatory factor analysis. The final version of the developed computer literacy scale included six constructs (software, hardware, multimedia, networks, information ethics, and information security) and 22 measurement items. Confirmatory factor analysis showed that the scale possessed good content validity, reliability, convergent validity, and discriminant validity. This study also found that participants earned the highest scores for the network domain and the lowest score for the hardware domain. With increasing use of information technology applications, courses related to hardware topics should be increased to improve nurse problem-solving abilities. This study recommends that the emphasis on word processing and network-related topics may be reduced in favor of an increased emphasis on database, statistical software, hospital information systems, and information ethics.

  5. Musculoskeletal disorders of the neck and upper extremity in computer workers

    International Nuclear Information System (INIS)

    Rasool, A.; Bashir, M.S.; Noor, R.

    2017-01-01

    To evaluate the prevalence of work-related musculoskeletal disorders (WRMSD) among computer office workers who work 6 hours or more per day on the computer. Methodology: This cross sectional study was conducted in different government and private banks and mobile franchises. Demographic information, work ergonomics and relevant data were collected by using a standardized questionnaire after obtaining signed consent from the participants. Data were analyzed through SPSS version 16. Results: Out of 150 potential subjects, 128 returned completed questionnaires, a response rate of 85%. Age ranged between 25-35 years. Neck-associated complaints were present in 47.42% of males and 67.74% of females. Shoulder complaints were present in 45.36% of males and 77.42% of females. Hand complaints were present in 20.62% of males and 54.84% of females. Conclusion: The prevalence rate of WRMSD was higher among females than males. (author)

  6. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    International Nuclear Information System (INIS)

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-01-01

    We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer
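
    The abstract compares against the LOBPCG baseline; the sketch below runs that baseline on a small sparse test matrix via scipy.sparse.linalg.lobpcg (the paper's projected PCG variant is not available in SciPy, and a real calculation would also supply a preconditioner).

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import lobpcg

        # Sparse Hermitian test problem: a 1-D Laplacian standing in for the
        # large matrices arising in DFT electronic-structure calculations.
        n, k = 2000, 20
        A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

        rng = np.random.default_rng(9)
        X = rng.normal(size=(n, k))              # block of starting vectors

        # LOBPCG baseline; convergence is slow without a preconditioner M,
        # which a production run would provide.
        eigvals, eigvecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)

        resid = A @ eigvecs - eigvecs * eigvals
        print("smallest eigenvalue estimates:", np.sort(eigvals)[:3])
        print("max residual norm            :", np.linalg.norm(resid, axis=0).max())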

  7. Computational methods for criticality safety analysis within the scale system

    International Nuclear Information System (INIS)

    Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.

    1986-01-01

    The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs

  8. Usability Evaluation of Notebook Computers and Cellular Telephones Among Users with Visual and Upper Extremity Disabilities

    OpenAIRE

    Mooney, Aaron Michael

    2002-01-01

    Information appliances such as notebook computers and cellular telephones are becoming integral to the lives of many. These devices facilitate a variety of communication tasks, and are used for employment, education, and entertainment. Those with disabilities, however, have limited access to these devices, due in part to product designs that do not consider their special needs. A usability evaluation can help identify the needs and difficulties those with disabilities have when using a pro...

  9. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  10. Dual-Energy Computed Tomography Angiography of the Lower Extremity Runoff: Impact of Noise-Optimized Virtual Monochromatic Imaging on Image Quality and Diagnostic Accuracy.

    Science.gov (United States)

    Wichmann, Julian L; Gillott, Matthew R; De Cecco, Carlo N; Mangold, Stefanie; Varga-Szemes, Akos; Yamada, Ricardo; Otani, Katharina; Canstein, Christian; Fuller, Stephen R; Vogl, Thomas J; Todoran, Thomas M; Schoepf, U Joseph

    2016-02-01

    The aim of this study was to evaluate the impact of a noise-optimized virtual monochromatic imaging algorithm (VMI+) on image quality and diagnostic accuracy at dual-energy computed tomography angiography (CTA) of the lower extremity runoff. This retrospective Health Insurance Portability and Accountability Act-compliant study was approved by the local institutional review board. We evaluated dual-energy CTA studies of the lower extremity runoff in 48 patients (16 women; mean age, 63.3 ± 13.8 years) performed on a third-generation dual-source CT system. Images were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monochromatic (VMI) algorithms at 40 to 120 keV in 10-keV intervals. Vascular attenuation and image noise in 18 artery segments were measured; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used to subjectively evaluate vascular attenuation and image noise. In a subgroup of 21 patients who underwent additional invasive catheter angiography, diagnostic accuracy for the detection of significant stenosis (≥50% lumen restriction) of F_0.5, 50-keV VMI+, and 60-keV VMI data sets was assessed. Objective image quality metrics were highest in the 40- and 50-keV VMI+ series (SNR: 20.2 ± 10.7 and 19.0 ± 9.5, respectively; CNR: 18.5 ± 10.3 and 16.8 ± 9.1, respectively) and were significantly higher than those of the traditional VMI technique and standard linear blending for evaluation of the lower extremity runoff using dual-energy CTA.

  11. The diagnostic value of time-resolved MR angiography with Gadobutrol at 3 T for preoperative evaluation of lower extremity tumors: Comparison with computed tomography angiography

    International Nuclear Information System (INIS)

    Wu, Gang; Jin, Teng; Li, Ting; Li, Xiaoming

    2016-01-01

    To evaluate the diagnostic value of time-resolved magnetic resonance angiography with interleaved stochastic trajectory (TWIST) using Gadobutrol for preoperative evaluation of lower extremity tumors. This prospective study was approved by the local Institutional Review Board. 50 consecutive patients (31 men, 19 women, age range 18–80 years, average age 42.7 years) with lower extremity tumors underwent TWIST and computed tomography angiography (CTA). Image quality of TWIST and CTA was evaluated by two radiologists according to a 4-point scale. The degree of arterial stenosis caused by tumor was assessed using TWIST and CTA separately, and the intra-modality agreement was determined using a kappa test. The number of feeding arteries identified by TWIST was compared with that by CTA using a Wilcoxon signed rank test. The ability to identify arterio-venous fistulae (AVF) was compared using a chi-square test. Image quality of TWIST and CTA was rated as 3.88 ± 0.37 and 3.97 ± 0.16, without a statistically significant difference (P = 0.135). Intra-modality agreement was excellent for the assessment of arterial stenosis (kappa = 0.806 ± 0.073 for Reader 1, kappa = 0.805 ± 0.073 for Reader 2). Readers identified AVF with TWIST in 27 of 50 cases, and with CTA in 14 of 50 (P < 0.001). The mean number of feeding arteries identified with TWIST was significantly higher than with CTA (2.08 ± 1.72 vs 1.62 ± 1.52, P = 0.02). TWIST is a reliable imaging modality for the assessment of lower extremity tumors. TWIST is comparable to CTA for the identification of AVF and feeding arteries

  12. Developing a New Computer Game Attitude Scale for Taiwanese Early Adolescents

    Science.gov (United States)

    Liu, Eric Zhi-Feng; Lee, Chun-Yi; Chen, Jen-Huang

    2013-01-01

    With ever increasing exposure to computer games, gaining an understanding of the attitudes held by young adolescents toward such activities is crucial; however, few studies have provided scales with which to accomplish this. This study revisited the Computer Game Attitude Scale developed by Chappell and Taylor in 1997, reworking the overall…

  13. Coupled large-eddy simulation and morphodynamics of a large-scale river under extreme flood conditions

    Science.gov (United States)

    Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team

    2016-11-01

    We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamics modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500-year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach using a nested domain approach to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based-engineering approach to investigate the resilience of infrastructures to extreme flood events in intricate field-scale riverine systems. This work was funded by a grant from the Minnesota Dept. of Transportation.

  14. Techniques involving extreme environment, nondestructive techniques, computer methods in metals research, and data analysis

    International Nuclear Information System (INIS)

    Bunshah, R.F.

    1976-01-01

    A number of different techniques which range over several different aspects of materials research are covered in this volume. They are concerned with property evaluation at 4.0 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials

  15. Computational investigation of two interventions for neck and upper extremity pain in office workers

    DEFF Research Database (Denmark)

    Rasmussen, J.; De Zee, M.

    2010-01-01

    This paper reports on novel results derived from a computer model of a typical office work place. We demonstrate how the detailed albeit very small muscle loads can be analyzed and how the effect of two interventions can be assessed using the model. The investigations reveal that both interventions reduce the muscle loads, but a wrist cushion is the more effective intervention type for the vast majority of the muscles. It is concluded that the method can offer useful assistance for design and prescription of efficient interventions for particular ergonomic problems typical for office workers.

  16. Lattice QCD - a challenge in large scale computing

    International Nuclear Information System (INIS)

    Schilling, K.

    1987-01-01

    The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean space-time world. The talk will attempt to introduce the present state of the art of spectrum calculations by lattice simulations. (orig.)

  17. Two spatial scales in a bleaching event: Corals from the mildest and the most extreme thermal environments escape mortality

    KAUST Repository

    Pineda, Jesús

    2013-07-28

    In summer 2010, a bleaching event decimated the abundant reef flat coral Stylophora pistillata in some areas of the central Red Sea, where a series of coral reefs 100–300 m wide by several kilometers long extends from the coastline to about 20 km offshore. Mortality of corals along the exposed and protected sides of inner (inshore) and mid and outer (offshore) reefs and in situ and satellite sea surface temperatures (SSTs) revealed that the variability in the mortality event corresponded to two spatial scales of temperature variability: 300 m across the reef flat and 20 km across a series of reefs. However, the relationship between coral mortality and habitat thermal severity was opposite at the two scales. SSTs in summer 2010 were similar or increased modestly (0.5°C) in the outer and mid reefs relative to 2009. In the inner reef, 2010 temperatures were 1.4°C above the 2009 seasonal maximum for several weeks. We detected little or no coral mortality in mid and outer reefs. In the inner reef, mortality depended on exposure. Within the inner reef, mortality was modest on the protected (shoreward) side, the most severe thermal environment, with highest overall mean and maximum temperatures. In contrast, acute mortality was observed in the exposed (seaward) side, where temperature fluctuations and upper water temperature values were relatively less extreme. Refuges to thermally induced coral bleaching may include sites where extreme, high-frequency thermal variability may select for coral holobionts preadapted to, and physiologically condition corals to withstand, regional increases in water temperature.

  18. Extreme Postnatal Scaling in Bat Feeding Performance: A View of Ecomorphology from Ontogenetic and Macroevolutionary Perspectives.

    Science.gov (United States)

    Santana, Sharlene E; Miller, Kimberly E

    2016-09-01

    Ecomorphology studies focus on understanding how anatomical and behavioral diversity result in differences in performance, ecology, and fitness. In mammals, the determinate growth of the skeleton entails that bite performance should change throughout ontogeny until the feeding apparatus attains its adult size and morphology. Then, interspecific differences in adult phenotypes are expected to drive food resource partitioning and patterns of lineage diversification. However, formal tests of these predictions are lacking for the majority of mammal groups, and thus our understanding of mammalian ecomorphology remains incomplete. By focusing on a fundamental measure of feeding performance, bite force, and capitalizing on the extraordinary morphological and dietary diversity of bats, we discuss how the intersection of ontogenetic and macroevolutionary changes in feeding performance may impact ecological diversity in these mammals. We integrate data on cranial morphology and bite force gathered through longitudinal studies of captive animals and comparative studies of free-ranging individuals. We demonstrate that ontogenetic trajectories and evolutionary changes in bite force are highly dependent on changes in body and head size, and that bats exhibit dramatic, allometric increases in bite force during ontogeny. Interspecific variation in bite force is highly dependent on differences in cranial morphology and function, highlighting selection for ecological specialization. While more research is needed to determine how ontogenetic changes in size and bite force specifically impact food resource use and fitness in bats, interspecific diversity in cranial morphology and bite performance seem to closely match functional differences in diet. Altogether, these results suggest direct ecomorphological relationships at ontogenetic and macroevolutionary scales in bats. © The Author 2016. Published by Oxford University Press on behalf of the Society for Integrative and Comparative

  19. Multi-scale and multi-domain computational astrophysics.

    Science.gov (United States)

    van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies

    2014-08-06

    Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena this governance is present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production quality community codes and provides a specialized environment to connect this plethora of solvers to a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  20. Modification of grey scale in computer tomographic images

    International Nuclear Information System (INIS)

    Hemmingsson, A.; Jung, B.

    1980-01-01

    Optimum perception of minute but relevant attenuation differences in CT images often requires display window settings so narrow that a considerable fraction of the image appears completely black or white and consequently without structure. In order to improve the display characteristics, two principles of grey scale modification are presented. In one method the pixel contents are displayed unchanged within a selectable attenuation band but are moved towards the limits of the band for pixels that lie outside it. In the other, the grey scale is arranged so that there is a constant number of pixels per grey scale interval. (Auth.)
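
    A minimal sketch of the two display principles just described, applied to a synthetic slice in Hounsfield units; the hard clamping in the first method and the exact equal-count assignment in the second are simplifications of the paper's schemes.

        import numpy as np

        def band_display(hu, lo, hi):
            """Method 1: values inside the attenuation band [lo, hi] are kept
            unchanged; values outside are moved to the band limits (clamped)."""
            return np.clip(hu, lo, hi)

        def equalized_display(hu, n_levels=256):
            """Method 2: assign grey levels so that each level holds (about)
            the same number of pixels, i.e. histogram equalization."""
            order = np.argsort(hu, axis=None)
            levels = np.empty(hu.size)
            levels[order] = np.floor(np.arange(hu.size) * n_levels / hu.size)
            return levels.reshape(hu.shape)

        # Synthetic CT slice: air background, soft tissue with a subtle
        # 15 HU lesion, and a band of bone.
        rng = np.random.default_rng(10)
        hu = np.full((256, 256), -1000.0)
        hu[40:216, 40:216] = rng.normal(40.0, 5.0, (176, 176))
        hu[100:120, 100:120] += 15.0
        hu[150:166, 40:216] = 700.0

        narrow = band_display(hu, 0.0, 80.0)            # narrow soft-tissue band
        equal = equalized_display(hu)

        clipped = np.mean((hu < 0.0) | (hu > 80.0))
        print(f"pixels pushed to the band limits: {100 * clipped:.0f}%")
        print("pixels per grey level after equalization:",
              np.bincount(equal.astype(int))[:4])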

  1. EMG-Torque correction on Human Upper extremity using Evolutionary Computation

    Science.gov (United States)

    JL, Veronica; Parasuraman, S.; Khan, M. K. A. Ahamed; Jeba DSingh, Kingsly

    2016-09-01

    There have been many studies indicating that the control system of a rehabilitative robot plays an important role in determining the outcome of the therapy process. Existing works have predicted the controller's feedback signal based on kinematic parameters and EMG readings of the upper limb's skeletal system. A kinematics- and kinetics-based control signal system is developed by reading the output of sensors such as position, orientation and F/T (Force/Torque) sensors; their readings are compared with the preceding measurement to decide on the amount of assistive force. There are also other works that incorporate the kinematic parameters to calculate the kinetic parameters via formulation and pre-defined assumptions. Nevertheless, these types of control signals analyze the movement of the upper limb only based on the movement of the upper joints. They do not anticipate the possibility of muscle plasticity. The focus of this paper is to make use of the kinematic parameters and EMG readings of the skeletal system to predict the individual torques of the upper extremity's joints. The surface EMG signals are fed into different mathematical models so that these data can be trained through a Genetic Algorithm (GA) to find the best correlation between EMG signals and the torques acting on the upper limb's joints. The estimated torque obtained from the mathematical models is called the simulated output. The simulated output is then compared with the actual individual joint torque, which is calculated from the real-time kinematic parameters of the upper-limb movement when the muscles are activated. The findings from this contribution are extended into the development of an active-control-signal-based controller for a rehabilitation robot.
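
    As a toy version of the GA fitting step described above (not the paper's models), the sketch below evolves the weights of a simple linear EMG-plus-angle torque model against synthetic training data using truncation selection and Gaussian mutation.

        import numpy as np

        rng = np.random.default_rng(12)

        # Hypothetical training data: EMG envelopes of two muscles plus joint
        # angle, and the corresponding elbow torque from inverse dynamics.
        n = 400
        emg = rng.uniform(0.0, 1.0, size=(n, 2))
        angle = rng.uniform(0.0, np.pi / 2, n)
        true_w = np.array([12.0, 7.0, -3.0])
        features = np.column_stack([emg, np.cos(angle)])
        torque = features @ true_w + rng.normal(0.0, 0.3, n)

        def fitness(w):
            """Negative RMS error between simulated and measured torque."""
            return -np.sqrt(np.mean((features @ w - torque) ** 2))

        # Minimal GA: truncation selection + Gaussian mutation over the weights.
        pop = rng.uniform(-20.0, 20.0, size=(60, 3))
        for gen in range(200):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-15:]]              # keep the best quarter
            children = parents[rng.integers(0, 15, size=45)] \
                       + rng.normal(0.0, 0.5, size=(45, 3))       # mutate copies
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(w) for w in pop])]
        print("estimated weights:", np.round(best, 1), " true:", true_w)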

  2. The Great Chains of Computing: Informatics at Multiple Scales

    Directory of Open Access Journals (Sweden)

    Kevin Kirby

    2011-10-01

    The perspective from which information processing is pervasive in the universe has proven to be an increasingly productive one. Phenomena from the quantum level to social networks have commonalities that can be usefully explicated using principles of informatics. We argue that the notion of scale is particularly salient here. An appreciation of what is invariant and what is emergent across scales, and of the variety of different types of scales, establishes a useful foundation for the transdiscipline of informatics. We survey the notion of scale and use it to explore the characteristic features of information statics (data), kinematics (communication), and dynamics (processing). We then explore the analogy to the principles of plenitude and continuity that feature in Western thought, under the name of the "great chain of being", from Plato through Leibniz and beyond, and show that the pancomputational turn is a modern counterpart of this ruling idea. We conclude by arguing that this broader perspective can enhance informatics pedagogy.

  3. Large scale computing in the Energy Research Programs

    International Nuclear Information System (INIS)

    1991-05-01

    The Energy Research Supercomputer Users Group (ERSUG) comprises all investigators using resources of the Department of Energy Office of Energy Research supercomputers. At the December 1989 meeting held at Florida State University (FSU), the ERSUG executive committee determined that the continuing rapid advances in computational sciences and computer technology demanded a reassessment of the role computational science should play in meeting DOE's commitments. Initial studies were to be performed for four subdivisions: (1) Basic Energy Sciences (BES) and Applied Mathematical Sciences (AMS), (2) Fusion Energy, (3) High Energy and Nuclear Physics, and (4) Health and Environmental Research. The first two subgroups produced formal subreports that provided a basis for several sections of this report. Additional information provided by the AMS/BES subgroup is included as Appendix C in an abridged form that eliminates most duplication. Additionally, each member of the executive committee was asked to contribute area-specific assessments; these assessments are included in the next section. In the following sections, brief assessments are given for specific areas, a conceptual model is proposed in which the entire computational effort for energy research is best viewed as one giant nation-wide computer, and specific recommendations are made for the appropriate evolution of the system

  4. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  5. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    Science.gov (United States)

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  6. Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764

    Energy Technology Data Exchange (ETDEWEB)

    Mustain, Christopher J. [Council on Competitiveness, Washington, DC (United States)

    2016-01-13

    The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.

  7. Computer design of porous active materials at different dimensional scales

    Science.gov (United States)

    Nasedkin, Andrey

    2017-12-01

    The paper presents a mathematical and computer modeling of effective properties of porous piezoelectric materials of three types: with ordinary porosity, with metallized pore surfaces, and with nanoscale porosity structure. The described integrated approach includes the effective moduli method of composite mechanics, simulation of representative volumes, and finite element method.

  8. Conservative treatment of soft tissue sarcomas of the extremities. Functional evaluation with LENT-SOMA scales and the Enneking score

    International Nuclear Information System (INIS)

    Tawfiq, N.; Lagarde, P.; Thomas, L.; Kantor, G.; Stockle, E.; Bui, B.N.

    2000-01-01

    Objective. - The aim of this prospective study is to assess the feasibility of late-effects assessment by the LENT-SOMA scales after conservative treatment of soft tissue sarcomas of the extremities, and to compare it with functional evaluation by the Enneking score. Patients and methods. - During systematic follow-up consultations, a series of 32 consecutive patients was evaluated for late effects with the LENT-SOMA scales and for functional results with the Enneking score. The median time after treatment was 65 months. The treatment consisted of conservative surgery (all cases) followed by radiation therapy (29 cases), often combined with adjuvant therapy (12 of 14 cases with concomitant radio-chemotherapy). The assessment of toxicity was retrospective for acute effects and prospective for the following late tissue damage: skin/subcutaneous tissues, muscles/soft tissues and peripheral nerves. Results. - According to the Enneking score, the global score for the overall series was high (24/30) despite four scores of zero for psychological acceptance. According to the LENT-SOMA scales, a low rate of severe sequelae (grade 3-4) was observed. The occurrence of high-grade sequelae and their functional consequences were not correlated with quality of exeresis, dose of radiotherapy or use of concomitant chemotherapy. A complementarity was observed between certain factors of the Enneking score and some criteria of the LENT-SOMA scales, especially for muscles/soft tissues. Conclusion. - The good quality of functional results was confirmed by the two main scoring systems for late normal tissue damage. The routine use of LENT-SOMA seems to be more time consuming than the Enneking score (mean scoring time: 13 versus five minutes). The LENT-SOMA scales are aimed at a detailed description of late toxicity and sequelae while the Enneking score provides a more global evaluation, including the psychological acceptance of treatment. The late effects assessment by the LENT

  9. Microstructural analysis of TRISO particles using multi-scale X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lowe, T., E-mail: tristan.lowe@manchester.ac.uk [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); Bradley, R.S. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); Yue, S. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); The Research Complex at Harwell, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0FA (United Kingdom); Barii, K. [School of Mechanical Engineering, University of Manchester, M13 9PL (United Kingdom); Gelb, J. [Zeiss Xradia Inc., Pleasanton, CA (United States); Rohbeck, N. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); Turner, J. [School of Mechanical Engineering, University of Manchester, M13 9PL (United Kingdom); Withers, P.J. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); The Research Complex at Harwell, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0FA (United Kingdom)

    2015-06-15

    TRISO particles, a composite nuclear fuel built up by ceramic and graphitic layers, have outstanding high temperature resistance. TRISO fuel is the key technology for High Temperature Reactors (HTRs) and the Generation IV Very High Temperature Reactor (VHTR) variant. TRISO offers unparalleled containment of fission products and is extremely robust during accident conditions. An understanding of the thermal performance and mechanical properties of TRISO fuel requires a detailed knowledge of pore sizes, their distribution and interconnectivity. Here nano-scale computed tomography (CT) at 50 nm resolution and micro-CT at 1 μm resolution have been used to non-destructively quantify the porosity of a surrogate TRISO particle at the 0.3–10 μm and 3–100 μm scales, respectively. This indicates that pore distributions can reliably be measured down to a size approximately 3 times the pixel size, which is consistent with the segmentation process. Direct comparison with Scanning Electron Microscopy (SEM) sections indicates that destructive sectioning can introduce significant levels of coarse damage, especially in the pyrolytic carbon layers. Further comparative work is required to identify means of minimizing such damage for SEM studies. Finally, since it is non-destructive, multi-scale time-lapse X-ray CT opens the possibility of intermittently tracking the degradation of the TRISO structure under thermal cycles or radiation conditions in order to validate models of degradation such as kernel movement. In-situ X-ray CT experimentation on TRISO particles under load and temperature could also be used to understand the internal changes that occur in the particles under accident conditions.

  10. Standardizing Scale Height Computation of Maven Ngims Neutral Data and Variations Between Exobase and Homeopause Scale Heights

    Science.gov (United States)

    Elrod, M. K.; Slipski, M.; Curry, S.; Williamson, H. N.; Benna, M.; Mahaffy, P. R.

    2017-12-01

    The MAVEN NGIMS team produces a level 3 product which includes the computation of the Ar scale height and the atmospheric temperature at 200 km. In the latest version (v05_r01) this has been revised to include scale height fits for CO2, N2, O and CO. Members of the MAVEN team have used various methods to compute scale heights, leading to significant variations in scale height values depending on fits and techniques, even within a few orbits and, occasionally, within the same pass. Additionally, fitting scale heights in a very stable atmosphere like the day side versus the night side can give different results depending on boundary conditions. Currently, most methods only compute Ar scale heights, as Ar is most stable and reacts least with the instrument. The NGIMS team has chosen to expand these fitting techniques to include fitted scale heights for CO2, N2, CO, and O. Having compared multiple techniques, the method found to be most reliable under most conditions was a simple fit method: it determines the exobase altitude of the CO2 atmosphere as the highest point for fitting, uses the periapsis as the lowest point, and then fits altitude versus log(density). The slope of altitude vs log(density) is -1/H, where H is the scale height of the atmosphere for each species. Since this is between the homeopause and the exobase, each species will have a different scale height by this point. This is being released as a new standardization for the level 3 product, with the understanding that scientists and team members will continue to compute more precise scale heights and temperatures as needed based on science and model demands. This is being released in the PDS NGIMS level 3 v05 files for August 2017. Additionally, we are examining these scale heights for variations seasonally, diurnally, and above and below the exobase. The atmosphere is significantly more stable on the dayside than on the nightside. We have also found
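    A minimal sketch of the fitting step just described (function and variable names are assumptions, not from the NGIMS pipeline): between periapsis and the exobase, log(density) falls off linearly with altitude with slope -1/H, so the scale height follows from a straight-line fit.

        import numpy as np

        def fit_scale_height(altitude_km, density, alt_periapsis, alt_exobase):
            """Fit log(density) versus altitude between periapsis and the exobase;
            the slope is -1/H, so H = -1/slope (one value per species)."""
            mask = (altitude_km >= alt_periapsis) & (altitude_km <= alt_exobase)
            slope, _intercept = np.polyfit(altitude_km[mask], np.log(density[mask]), 1)
            return -1.0 / slope   # scale height H in km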

  11. Scale-up and optimization of biohydrogen production reactor from laboratory-scale to industrial-scale on the basis of computational fluid dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)

    2010-10-15

    The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors is still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of hydrodynamics evaluation, the structure of industrial-scale CSTR was optimized and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)

  12. Cross Validated Temperament Scale Validities Computed Using Profile Similarity Metrics

    Science.gov (United States)

    2017-04-27

    U.S. Army Research Institute for the Behavioral & Social Sciences. ...respondent's scale score is equal to the mean of the non-reversed and recoded-reversed items. Table 1 portrays the conventional scoring algorithm on
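    A minimal sketch of the conventional scoring rule quoted in the fragment above (array layout, response range, and function names are assumptions): reverse-keyed items are recoded and each respondent's scale score is the mean of the non-reversed and recoded-reversed items.

        import numpy as np

        def scale_scores(responses, reversed_items, min_val=1, max_val=5):
            """responses: (n_respondents, n_items) Likert-type response matrix.
            reversed_items: zero-based columns that are reverse-keyed.
            Reverse-keyed items are recoded as (min + max - response); the scale
            score is the mean over all items for each respondent."""
            recoded = np.asarray(responses, dtype=float).copy()
            recoded[:, reversed_items] = min_val + max_val - recoded[:, reversed_items]
            return recoded.mean(axis=1)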

  13. Neural Computations in a Dynamical System with Multiple Time Scales

    Directory of Open Access Journals (Sweden)

    Yuanyuan Mi

    2016-09-01

    Full Text Available Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit is for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  14. Evaluation of seabed mapping methods for fine-scale classification of extremely shallow benthic habitats - Application to the Venice Lagoon, Italy

    Science.gov (United States)

    Montereale Gavazzi, G.; Madricardo, F.; Janowski, L.; Kruss, A.; Blondel, P.; Sigovini, M.; Foglini, F.

    2016-03-01

    Recent technological developments of multibeam echosounder systems (MBES) allow mapping of benthic habitats with unprecedented detail. MBES can now be employed in extremely shallow waters, challenging data acquisition (as these instruments were often designed for deeper waters) and data interpretation (honed on datasets with resolution sometimes orders of magnitude lower). With extremely high-resolution bathymetry and co-located backscatter data, it is now possible to map the spatial distribution of fine-scale benthic habitats, even identifying the acoustic signatures of single sponges. In this context, it is necessary to understand which of the commonly used segmentation methods is best suited to account for such a level of detail. At the same time, new sampling protocols for precisely geo-referenced ground truth data need to be developed to validate the benthic environmental classification. This study focuses on a dataset collected in a shallow (2-10 m deep) tidal channel of the Lagoon of Venice, Italy. Using 0.05-m and 0.2-m raster grids, we compared a range of classification approaches, both pixel-based and object-based, including manual classification, the Maximum Likelihood Classifier, Jenks optimization clustering, textural analysis and Object-Based Image Analysis. Through a comprehensive and accurately geo-referenced ground truth dataset, we were able to identify five different classes of substrate composition, including sponges, mixed submerged aquatic vegetation, mixed detritic bottom (fine and coarse) and unconsolidated bare sediment. We computed estimates of accuracy (namely Overall, User's and Producer's Accuracies and the Kappa statistic) by cross-tabulating predicted and reference instances. Overall, pixel-based segmentations produced the highest accuracies, and the accuracy assessment is strongly dependent on the number of classes chosen for the thematic output. Tidal channels in the Venice Lagoon are extremely important in terms of habitats and sediment distribution
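    A minimal sketch (not the study's code; names are assumptions) of the accuracy estimates listed above, obtained by cross-tabulating predicted and reference class labels:

        import numpy as np

        def accuracy_metrics(reference, predicted, n_classes):
            """Confusion matrix plus Overall, User's and Producer's accuracies and
            Cohen's Kappa, from integer class labels in [0, n_classes)."""
            cm = np.zeros((n_classes, n_classes), dtype=int)
            for r, p in zip(reference, predicted):
                cm[r, p] += 1                            # rows: reference, columns: predicted
            total = cm.sum()
            overall = np.trace(cm) / total
            users = np.diag(cm) / cm.sum(axis=0)         # per predicted class (commission errors)
            producers = np.diag(cm) / cm.sum(axis=1)     # per reference class (omission errors)
            expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
            kappa = (overall - expected) / (1.0 - expected)
            return cm, overall, users, producers, kappa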

  15. Lower extremity computed tomography angiography can help predict technical success of endovascular revascularization in the superficial femoral and popliteal artery.

    Science.gov (United States)

    Itoga, Nathan K; Kim, Tanner; Sailer, Anna M; Fleischmann, Dominik; Mell, Matthew W

    2017-09-01

    Preprocedural computed tomography angiography (CTA) assists in evaluating vascular morphology and disease distribution and in treatment planning for patients with lower extremity peripheral artery disease (PAD). The aim of the study was to determine the predictive value of radiographic findings on CTA for technical success of endovascular revascularization of occlusions in the superficial femoral artery-popliteal (SFA-pop) region. Medical records and available imaging studies were reviewed for patients undergoing endovascular intervention for PAD between January 2013 and December 2015 at a single academic institution. Radiologists reviewed preoperative CTA scans of patients with occlusions in the SFA-pop region. Radiographic criteria previously used to evaluate chronic occlusions in the coronary arteries were used. Technical success was defined as restoration of inline flow through the SFA-pop region; 100% vessel calcification was associated with technical failure (P = .014). Longer lengths of occlusion were also associated with technical failure (P = .042). Multiple occlusions (P = .55), negative remodeling (P = .69), vessel runoff (P = .56), and percentage of vessel calcification (P = .059) were not associated with failure. On multivariable analysis, 100% calcification remained the only significant predictor of technical failure (odds ratio, 9.0; 95% confidence interval, 1.8-45.8; P = .008). Analysis of preoperative CTA shows 100% calcification as the best predictor of technical failure of endovascular revascularization of occlusions in the SFA-pop region. Further studies are needed to determine the cost-effectiveness of obtaining preoperative CTA for lower extremity PAD. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
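    For illustration only (the table labels below are hypothetical placeholders, not the study's data): an odds ratio and Wald confidence interval of the kind reported above for a binary predictor such as 100% calcification can be computed from a 2x2 table as follows.

        import numpy as np

        def odds_ratio_wald(a, b, c, d, z=1.96):
            """2x2 table with hypothetical labels:
            a = technical failures with the predictor, b = successes with it,
            c = failures without the predictor,        d = successes without it.
            Returns the odds ratio and its 95% Wald confidence interval."""
            odds_ratio = (a * d) / (b * c)
            se_log_or = np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
            lower, upper = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)
            return odds_ratio, (lower, upper)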

  16. Dominant Large-Scale Atmospheric Circulation Systems for the Extreme Precipitation over the Western Sichuan Basin in Summer 2013

    Directory of Open Access Journals (Sweden)

    Yamin Hu

    2015-01-01

    Full Text Available The western Sichuan Basin (WSB) is a rainstorm center influenced by complicated factors such as topography and circulation. Based on a multivariable empirical orthogonal function analysis of extreme precipitation processes (EPPs) in the WSB in 2013, this study reveals the dominant circulation patterns. Results indicate that the leading modes are characterized by "Saddle" and "Sandwich" structures, respectively. In one mode, a tropical cyclone (TC) from the South China Sea (SCS) converts into an inverted trough and steers warm moist airflow northward into the WSB. At the same time, the western Pacific subtropical high (WPSH) extends westward over the Yangtze River and conveys a southeasterly warm humid flow. In the other case, the WPSH is pushed westward by a TC in the western Pacific and then merges with an anomalous anticyclone over the SCS. The anomalous anticyclone and the WPSH form a conjunction belt and convey warm moist southwesterly airflow to meet the cold flow over the WSB. The configurations of the WPSH and TC in the tropics and the blocking and trough in the mid-high latitudes play important roles during the EPPs over the WSB. The persistence of EPPs depends on the long-lived large-scale circulation configuration remaining steady over suitable positions.

  17. Large-scale computer-mediated training for management teachers

    Directory of Open Access Journals (Sweden)

    Gilly Salmon

    1997-01-01

    Full Text Available In 1995/6 the Open University Business School (OUBS) trained 187 tutors in the UK and Continental Western Europe in Computer Mediated Conferencing (CMC) for management education. The medium chosen for the training was FirstClassTM. In 1996/7 the OUBS trained a further 106 tutors in FirstClassTM using an improved version of the previous year's training. The online training was based on a previously developed model of learning online. The model was tested both by means of the structure of the training programme and through the improvements made. The training programme was evaluated and revised for the second cohort. A comparison was made between the two training programmes.

  18. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand, as limits and caps on usage are imposed. Our trial workflows allow us t...

  19. Maintaining SCALE as a reliable computational system for criticality safety analysis

    International Nuclear Information System (INIS)

    Bowmann, S.M.; Parks, C.V.; Martin, S.K.

    1995-01-01

    Accurate and reliable computational methods are essential for nuclear criticality safety analyses. The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer code system was originally developed at Oak Ridge National Laboratory (ORNL) to enable users to easily set up and perform criticality safety analyses, as well as shielding, depletion, and heat transfer analyses. Over the fifteen-year life of SCALE, the mainstay of the system has been the criticality safety analysis sequences that have featured the KENO-IV and KENO-V.A Monte Carlo codes and the XSDRNPM one-dimensional discrete-ordinates code. The criticality safety analysis sequences provide automated material and problem-dependent resonance processing for each criticality calculation. This report details configuration management, which is essential because SCALE consists of more than 25 computer codes (referred to as modules) that share libraries of commonly used subroutines; changes to a single subroutine in some cases affect almost every module in SCALE! Controlled access to program source and executables and accurate documentation of modifications are essential to maintaining SCALE as a reliable code system. The modules and subroutine libraries in SCALE are programmed by a staff of approximately ten Code Managers. The SCALE Software Coordinator maintains the SCALE system and is the only person who modifies the production source, executables, and data libraries. All modifications must be authorized by the SCALE Project Leader prior to implementation

  20. Direct Computation of Sound Radiation by Jet Flow Using Large-scale Equations

    Science.gov (United States)

    Mankbadi, R. R.; Shih, S. H.; Hixon, D. R.; Povinelli, L. A.

    1995-01-01

    Jet noise is directly predicted using large-scale equations. The computational domain is extended in order to directly capture the radiated field. As in conventional large-eddy-simulations, the effect of the unresolved scales on the resolved ones is accounted for. Special attention is given to boundary treatment to avoid spurious modes that can render the computed fluctuations totally unacceptable. Results are presented for a supersonic jet at Mach number 2.1.

  1. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand' as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost-effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  2. An assessment of future computer system needs for large-scale computation

    Science.gov (United States)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  3. Elastic Multi-scale Mechanisms: Computation and Biological Evolution.

    Science.gov (United States)

    Diaz Ochoa, Juan G

    2018-01-01

    Explanations based on low-level interacting elements are valuable and powerful since they contribute to identifying the key mechanisms of biological functions. However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information about initial states generate future states that cannot be predicted, implying an increase of complexity and open-ended evolution. Such systems are like Turing machines that overlap with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for a constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predators and predated cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. We finally explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with a small elasticity modulus.

  4. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Science.gov (United States)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  5. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    Science.gov (United States)

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for their biological activity owing to the inherent interactions amongst their secondary structures or domains. However, due to the limitations of experimental self-interaction detection, one major challenge in the study of SIP prediction is how to exploit computational approaches for SIP detection based on evolutionary information contained in the protein sequence. In this work, we present a novel computational approach named WELM-LAG, which combines the Weighed-Extreme Learning Machine (WELM) classifier with Local Average Group (LAG) features to predict SIPs based on protein sequence. The major improvement of our method lies in presenting an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position specific scoring matrix (PSSM), and then employing a reliable and robust WELM classifier to carry out classification. In addition, the Principal Component Analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on the yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on the human and yeast datasets, respectively. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/.
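    As a rough sketch only, under the assumption that the weighed (weighted) extreme learning machine follows the standard formulation, i.e. a random hidden layer with output weights obtained from a per-sample-weighted, regularized least-squares solve; the LAG/PSSM feature extraction and PCA steps of WELM-LAG are not reproduced here.

        import numpy as np

        class WeightedELM:
            """Minimal weighted extreme learning machine for binary labels in {-1, +1}."""
            def __init__(self, n_hidden=200, C=1.0, seed=0):
                self.n_hidden, self.C = n_hidden, C
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)          # random nonlinear feature map

            def fit(self, X, y, sample_weight):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = self._hidden(X)                           # (n_samples, n_hidden)
                D = np.diag(sample_weight)                    # per-sample weights (e.g. class balancing)
                A = H.T @ D @ H + np.eye(self.n_hidden) / self.C
                self.beta = np.linalg.solve(A, H.T @ D @ y)   # regularized weighted solve
                return self

            def predict(self, X):
                return np.sign(self._hidden(X) @ self.beta)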

  6. Measuring activity limitations in walking : Development of a hierarchical scale for patients with lower-extremity disorders who live at home

    NARCIS (Netherlands)

    Roorda, LD; Roebroeck, ME; van Tilburg, T; Molenaar, IW; Lankhorst, GJ; Bouter, LM

    2005-01-01

    Objective: To develop a hierarchical scale that measures activity limitations in walking in patients with lower-extremity disorders who live at home. Design: Cross-sectional study. Setting: Orthopedic workshops and outpatient clinics of secondary and tertiary care centers. Participants: Patients

  7. Recent hydrological variability and extreme precipitation events in Moroccan Middle-Atlas mountains: micro-scale analyses of lacustrine sediments

    Science.gov (United States)

    Jouve, Guillaume; Vidal, Laurence; Adallal, Rachid; Bard, Edouard; Benkaddour, Abdel; Chapron, Emmanuel; Courp, Thierry; Dezileau, Laurent; Hébert, Bertil; Rhoujjati, Ali; Simonneau, Anaelle; Sonzogni, Corinne; Sylvestre, Florence; Tachikawa, Kazuyo; Viry, Elisabeth

    2016-04-01

    Since the 1990s, the Mediterranean basin has undergone an increase in precipitation events and extreme droughts that are likely to intensify in the XXI century, and whose origin is attributable to human activities since 1850 (IPCC, 2013). Regional climate models indicate a strengthening of flood episodes at the end of the XXI century in Morocco (Tramblay et al., 2012). To understand recent hydrological and paleohydrological variability in North Africa, our study focuses on the macro- and micro-scale analysis of sedimentary sequences from Lake Azigza (Moroccan Middle Atlas Mountains) covering the last few centuries. This lake is relevant since local site monitoring revealed that lake water table levels were correlated with the precipitation regime (Adallal R., PhD Thesis in progress). The aim of our study is to distinguish sedimentary facies characteristic of low and high lake levels, in order to reconstruct past dry and wet periods during the last two hundred years. Here, we present results from sedimentological (lithology, grain size, microstructures under thin sections), geochemical (XRF) and physical (radiography) analyses of short sedimentary cores (64 cm long) taken from the deep basin of Lake Azigza (30 meters water depth). Cores have been dated (210Pb and 137Cs radionuclides, and 14C dating). Two main facies were distinguished: one organic-rich facies containing wood fragments and several reworked layers and characterized by Mn peaks; and a second facies composed of terrigenous clastic sediments, without wood or reworked layers, and characterized by Fe, Ti, Si and K peaks. The first facies is interpreted as a high lake level stand. Indeed, the highest paleoshoreline is close to the vegetation, and steeper banks can increase the current velocity, allowing the transport of wood fragments in case of extreme precipitation events. Mn peaks are interpreted as Mn oxide precipitation under well-oxygenated deep waters after runoff events. The second facies is linked to periods of

  8. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    Science.gov (United States)

    Lejiang Yu; Shiyuan Zhong; Lisi Pei; Xindi (Randy) Bian; Warren E. Heilman

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for...

  9. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defies analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  10. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  11. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence, including enhancement of the droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve the scales of turbulent energy-containing eddies while the effects of the turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes resolve fully the dissipation-range flow scales but only partially the inertial subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our ongoing efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  12. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing will be involved in this framework: multiple local distributed computing environments connected by local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable to perform the simulation of the multi-scale structural analysis.

  13. Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition

    International Nuclear Information System (INIS)

    Farnudi, Bahman; Vvedensky, Dimitri D

    2011-01-01

    Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system size L observed for the onset of asymptotic scaling (L ≅ 2^12) explains the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.

  14. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based, high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.

  15. A computational comparison of theory and practice of scale intonation in Byzantine chant

    DEFF Research Database (Denmark)

    Panteli, Maria; Purwins, Hendrik

    2013-01-01

    Byzantine Chant performance practice is quantitatively compared to the Chrysanthine theory. The intonation of scale degrees is quantified, based on pitch class profiles. An analysis procedure is introduced that consists of the following steps: 1) Pitch class histograms are calculated via non-parametric kernel smoothing. 2) Histogram peaks are detected. 3) Phrase ending analysis aids the finding of the tonic to align histogram peaks. 4) The theoretical scale degrees are mapped to the practical ones. 5) A schema of statistical tests detects significant deviations of theoretical scale tuning from the estimated ones in performance practice. The analysis of 94 echoi shows a tendency of the singer to level theoretic particularities of the echos that stand out of the general norm in the octoechos: theoretically extremely large scale steps are diminished in performance.
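    A minimal sketch of step 1 above (bandwidth and binning are assumptions, not taken from the paper): pitch values in cents are folded to one octave and smoothed with a circular Gaussian kernel to give a pitch-class histogram.

        import numpy as np

        def pitch_class_histogram(pitch_cents, bandwidth=15.0, n_bins=1200):
            """Fold pitch estimates to one octave (0-1200 cents) and apply
            non-parametric (circular Gaussian) kernel smoothing."""
            pc = np.asarray(pitch_cents) % 1200.0
            grid = np.arange(n_bins) * (1200.0 / n_bins)     # bin centres in cents
            diff = np.abs(grid[None, :] - pc[:, None])
            diff = np.minimum(diff, 1200.0 - diff)           # circular distance on the octave
            hist = np.exp(-0.5 * (diff / bandwidth) ** 2).sum(axis=0)
            return hist / hist.sum()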

  16. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    Science.gov (United States)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. 40 of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW values are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress drop and the Wood-Anderson filter. For instance, it is shown that the stress drop controls the saturation of the ML scale, with low stress drops (e.g., 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
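    A rough illustration of the fitting idea only (it uses vertical rather than orthogonal residuals, and all names are assumptions): a polynomial ML-to-MW scaling relation obtained by minimizing the L1 norm of the residuals, with bootstrap resampling to gauge the spread of the coefficients.

        import numpy as np
        from scipy.optimize import minimize

        def l1_polyfit(ml, mw, degree=2):
            """Polynomial scaling relation MW(ML) fitted by least absolute deviations
            (L1), started from the ordinary least-squares fit. Coefficients are
            ordered highest power first (np.vander convention)."""
            X = np.vander(ml, degree + 1)
            c0 = np.linalg.lstsq(X, mw, rcond=None)[0]
            res = minimize(lambda c: np.abs(X @ c - mw).sum(), c0, method="Nelder-Mead")
            return res.x

        def bootstrap_l1_polyfit(ml, mw, degree=2, n_boot=200, seed=0):
            """Resample (ML, MW) pairs with replacement and refit to estimate the
            variability of the scaling-relation coefficients."""
            rng = np.random.default_rng(seed)
            fits = np.array([l1_polyfit(ml[idx], mw[idx], degree)
                             for idx in (rng.integers(len(ml), size=len(ml))
                                         for _ in range(n_boot))])
            return fits.mean(axis=0), fits.std(axis=0)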

  17. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  18. Reliability and validity of the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower limb musculoskeletal disorders.

    Science.gov (United States)

    Negahban, Hossein; Hessam, Masumeh; Tabatabaei, Saeid; Salehi, Reza; Sohani, Soheil Mansour; Mehravar, Mohammad

    2014-01-01

    The aim was to culturally translate and validate the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower extremity musculoskeletal disorders (n = 304). This is a prospective methodological study. After a standard forward-backward translation, psychometric properties were assessed in terms of test-retest reliability, internal consistency, construct validity, dimensionality, and ceiling or floor effects. Acceptable levels of the intraclass correlation coefficient (>0.70) and Cronbach's alpha coefficient (>0.70) were obtained for the Persian LEFS. Correlations between the Persian LEFS and the Short-Form 36 Health Survey (SF-36) subscales of the Physical Health component (rs range = 0.38-0.78) were higher than correlations between the Persian LEFS and the SF-36 subscales of the Mental Health component (rs range = 0.15-0.39). A corrected item-total correlation of >0.40 (Spearman's rho) was obtained for all items of the Persian LEFS. Horn's parallel analysis detected a total of two factors. No ceiling or floor effects were detected for the Persian LEFS. The Persian version of the LEFS is a reliable and valid instrument that can be used to measure functional status in Persian-speaking patients with different musculoskeletal disorders of the lower extremity. Implications for Rehabilitation: The Persian lower extremity functional scale (LEFS) is a reliable, internally consistent and valid instrument, with no ceiling or floor effects, to determine the functional status of heterogeneous patients with musculoskeletal disorders of the lower extremity. The Persian version of the LEFS can be used in clinical and research settings to measure function in Iranian patients with different musculoskeletal disorders of the lower extremity.

  19. Performing three-dimensional neutral particle transport calculations on tera scale computers

    International Nuclear Information System (INIS)

    Woodward, C.S.; Brown, P.N.; Chang, B.; Dorr, M.R.; Hanebutte, U.R.

    1999-01-01

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines the MPI message-passing paradigm with additional levels of on-node parallelism. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP 'ASCI Blue-Pacific' computer located at Lawrence Livermore National Laboratory (LLNL)

  20. A review of parallel computing for large-scale remote sensing image mosaicking

    OpenAIRE

    Chen, Lajiao; Ma, Yan; Liu, Peng; Wei, Jingbo; Jie, Wei; He, Jijun

    2015-01-01

    Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up calculation process. The state of the art of this field has not yet been summarized, which is, however, essential for a better understanding and for further ...

  1. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects; discusses applications including scattering from airborne targets, scattering from red

  2. Optimization and large scale computation of an entropy-based moment closure

    Science.gov (United States)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, M_N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P_N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication-bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the M_N algorithm that do not appear for the P_N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M_N to P_N decreases.

  3. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration' mixing with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.

  4. An accurate and computationally efficient small-scale nonlinear FEA of flexible risers

    OpenAIRE

    Rahmati, MT; Bahai, H; Alfano, G

    2016-01-01

    This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers which can be effectively implemented in a fully-nested (FE^2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...

  5. Multi-scale data visualization for computational astrophysics and climate dynamics at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Ahern, Sean; Daniel, Jamison R; Gao, Jinzhu; Ostrouchov, George; Toedte, Ross J; Wang, Chaoli

    2006-01-01

    Computational astrophysics and climate dynamics are two principal application foci at the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL). We identify a dataset frontier that is shared by several SciDAC computational science domains and present an exploration of traditional production visualization techniques enhanced with new enabling research technologies such as advanced parallel occlusion culling and high resolution small multiples statistical analysis. In collaboration with our research partners, these techniques will allow the visual exploration of a new generation of peta-scale datasets that cross this data frontier along all axes

  6. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
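    As a rough illustration of the deterministic approach, the sketch below propagates parameter standard deviations through a model response using first-order sensitivities. Finite differences stand in for the automated derivative generation that systems such as GRESS and ADGEN provide; the model function and the parameter values are hypothetical.

      import numpy as np

      def sensitivities(model, p, eps=1e-6):
          # Finite-difference stand-in for computer-calculus derivatives dY/dp_i.
          p = np.asarray(p, dtype=float)
          y0 = model(p)
          grad = np.empty_like(p)
          for i in range(p.size):
              dp = np.zeros_like(p)
              dp[i] = eps * max(1.0, abs(p[i]))
              grad[i] = (model(p + dp) - y0) / dp[i]
          return grad

      def first_order_uncertainty(model, p, sigma):
          # sigma_Y^2 = sum_i (dY/dp_i)^2 * sigma_i^2, assuming independent parameters.
          g = sensitivities(model, p)
          return np.sqrt(np.sum((g * np.asarray(sigma)) ** 2))

      # Hypothetical response function and parameter uncertainties.
      model = lambda p: p[0] * np.exp(-p[1]) + p[2] ** 2
      print(first_order_uncertainty(model, p=[2.0, 0.5, 1.0], sigma=[0.1, 0.05, 0.2]))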

  7. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    Science.gov (United States)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions such that all of the interpolation information needed for each particle is available either locally on its host process or neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on
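    The ownership rule at the core of the algorithm, in which each particle is handed to the process holding the sub-domain that contains it, can be illustrated serially as below. The one-dimensional slab decomposition and the NumPy bookkeeping are simplifying assumptions; the actual code uses a multi-dimensional decomposition and one-sided Co-Array Fortran transfers of spline coefficients.

      import numpy as np

      def assign_particles_to_ranks(x, domain_length, nranks):
          # Map particle positions to the rank owning their sub-domain (1-D slabs).
          slab = domain_length / nranks
          return np.minimum((x // slab).astype(int), nranks - 1)

      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, 2.0 * np.pi, size=100_000)          # particle positions
      owner = assign_particles_to_ranks(x, 2.0 * np.pi, nranks=64)
      # Particles per rank; in the parallel code these counts drive the
      # redistribution of particles as they migrate across sub-domain boundaries.
      print(np.bincount(owner, minlength=64)[:8])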

  8. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have negative impact on the accuracy...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...

  9. Large scale inverse problems computational methods and applications in the earth sciences

    CERN Document Server

    Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan

    2013-01-01

    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" taking place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.

  10. Large-scale computer networks and the future of legal knowledge-based systems

    NARCIS (Netherlands)

    Leenes, R.E.; Svensson, Jorgen S.; Hage, J.C.; Bench-Capon, T.J.M.; Cohen, M.J.; van den Herik, H.J.

    1995-01-01

    In this paper we investigate the relation between legal knowledge-based systems and large-scale computer networks such as the Internet. On the one hand, researchers of legal knowledge-based systems have claimed huge possibilities, but despite the efforts over the last twenty years, the number of

  11. Electronic cleansing for computed tomography (CT) colonography using a scale-invariant three-material model

    NARCIS (Netherlands)

    Serlie, Iwo W. O.; Vos, Frans M.; Truyen, Roel; Post, Frits H.; Stoker, Jaap; van Vliet, Lucas J.

    2010-01-01

    A well-known reading pitfall in computed tomography (CT) colonography is posed by artifacts at T-junctions, i.e., locations where air-fluid levels interface with the colon wall. This paper presents a scale-invariant method to determine material fractions in voxels near such T-junctions. The proposed

  12. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together...

  13. Large-scale computation in solid state physics - Recent developments and prospects

    International Nuclear Information System (INIS)

    DeVreese, J.T.

    1985-01-01

    During the past few years an increasing interest in large-scale computation has been developing. Several initiatives were taken to evaluate and exploit the potential of 'supercomputers' like the CRAY-1 (or XMP) or the CYBER-205. In the U.S.A., the Lax report first appeared in 1982, and subsequently (1984) the National Science Foundation announced a program to promote large-scale computation at the universities. Also, in Europe several CRAY and CYBER-205 systems have been installed. Although the presently available mainframes are the result of a continuous growth in speed and memory, they might have induced a discontinuous transition in the evolution of the scientific method; between theory and experiment a third methodology, 'computational science', has become or is becoming operational.

  14. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  15. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  16. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  17. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times on the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  18. Front-end vision and multi-scale image analysis: multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  19. Potential changes in the extreme climate conditions at the regional scale: from observed data to modelling approaches and towards probabilistic climate change information

    International Nuclear Information System (INIS)

    Gachon, P.; Radojevic, M.; Harding, A.; Saad, C.; Nguyen, V.T.V.

    2008-01-01

    Changes in the characteristics of extreme climate conditions are one of the most critical challenges for ecosystems, human beings and infrastructure in the context of on-going global climate change. However, the information on extremes needed for impacts studies cannot be obtained directly from coarse-scale global climate models (GCMs), due mainly to their difficulty in incorporating the regional-scale feedbacks and processes that are responsible in part for the occurrence, intensity and duration of extreme events. Downscaling approaches, namely statistical and dynamical downscaling techniques (SD and RCM), have emerged as useful tools to develop high-resolution climate change information, in particular for extremes, as they are theoretically more capable of taking into account regional/local forcings and their feedbacks from large-scale influences, being driven with GCM synoptic variables. Nevertheless, in spite of the potential added value of downscaling methods (statistical and dynamical), a rigorous assessment of these methods is needed, as inherent difficulties in simulating extremes are still present. In this paper, different series of RCM and SD simulations using three different GCMs are presented and evaluated against observed values over the current period for a river basin in southern Quebec, together with future ensemble runs centered on the 2050s (the 2041-2070 period, using the SRES A2 emission scenario). Results suggest that the downscaling performance over the baseline period varies significantly between the two downscaling techniques and across seasons, with more consistently reliable simulated temperature values from the SD technique than from the RCM runs, while both approaches produce quite similar future temperature changes in median values, with more divergence for extremes. For precipitation, less accurate information is obtained compared to observed data, with more differences among models and higher uncertainties in the

  20. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of this work was to develop a new, fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under the body loading conditions. Finally, the cell-level model includes the bone remodeling mechanism, through an agent-based simulation under tissue loading. A case study on the bone remodeling process in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales as well as providing new informative data for clinical decision support and industrial applications.
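    A schematic of the integrated workflow is sketched below, with each anatomical scale reduced to a stub that passes its output down to the next level. All functions and numerical values are placeholders; in the actual workflow the three stages are a multi-body dynamics model, a finite-element model and an agent-based remodeling simulation.

      def organ_level(kinematics):
          # Multi-body dynamics stub: body kinematics -> muscle/joint loading.
          return {"muscle_force_N": 120.0 * kinematics["activation"]}

      def tissue_level(loads):
          # Finite-element stub: boundary loads -> local tissue strain.
          return {"strain": loads["muscle_force_N"] / 1.0e5}

      def cell_level(tissue_state, days=30):
          # Agent-based remodeling stub: strain-driven change in bone density.
          density = 1.0
          for _ in range(days):
              density += 0.01 * (tissue_state["strain"] - 1.0e-3)
          return density

      loads = organ_level({"activation": 0.8})
      tissue = tissue_level(loads)
      print(cell_level(tissue))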

  1. Effort-reward imbalance and one-year change in neck-shoulder and upper extremity pain among call center computer operators.

    Science.gov (United States)

    Krause, Niklas; Burgel, Barbara; Rempel, David

    2010-01-01

    The literature on psychosocial job factors and musculoskeletal pain is inconclusive, in part due to insufficient control for confounding by biomechanical factors. The aim of this study was to investigate prospectively the independent effects of effort-reward imbalance (ERI) at work on regional musculoskeletal pain of the neck and upper extremities of call center operators after controlling for (i) duration of computer use both at work and at home, (ii) ergonomic workstation design, (iii) physical activities during leisure time, and (iv) other individual worker characteristics. This was a one-year prospective study among 165 call center operators who participated in a randomized ergonomic intervention trial that has been described previously. Over an approximate four-week period, we measured ERI and 28 potential confounders via a questionnaire at baseline. Regional upper-body pain and computer use were measured by weekly surveys for up to 12 months following the implementation of ergonomic interventions. Regional pain change scores were calculated as the difference between average weekly pain scores pre- and post-intervention. A significant relationship was found between high average ERI ratios and one-year increases in right upper-extremity pain after adjustment for pre-intervention regional mean pain score, current and past physical workload, ergonomic workstation design, and anthropometric, sociodemographic, and behavioral risk factors. No significant associations were found with change in neck-shoulder or left upper-extremity pain. This study suggests that ERI predicts regional upper-extremity pain in computer operators working ≥20 hours per week. Control for physical workload and ergonomic workstation design was essential for identifying ERI as a risk factor.

  2. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files

  3. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  4. A multiple-scaling method of the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite element problems. It was therefore very difficult to carry out parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, these parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by F.E.M. on the precise geometry of the thread for some elementary loadings. The global one is formulated at the stud scale and is reduced to a one-dimensional problem. The resolution of this global problem comes at an insignificant computational cost. Then, a post-processing step gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. The validation by comparison with a direct F.E. computation and some further applications are presented
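    The post-processing step can be sketched as a superposition: the thread-scale stress is recovered by combining pre-computed local FE solutions for a few elementary loadings, weighted by the amplitudes returned by the one-dimensional global model. The elementary fields and weights below are invented for illustration, and the superposition as written assumes a linear local response (the paper also treats plasticity and unilateral contact, which require a more careful treatment).

      import numpy as np

      # Pre-computed local FE stress fields on the thread geometry, one per
      # elementary loading (e.g. unit axial load, unit bending); hypothetical values.
      local_fields = {
          "axial":   np.array([1.0, 2.5, 4.0, 2.0]),
          "bending": np.array([0.2, 1.0, 3.0, 5.0]),
      }

      def thread_scale_stress(global_amplitudes, local_fields):
          # Superpose elementary local solutions scaled by the global (1-D) solution.
          stress = np.zeros_like(next(iter(local_fields.values())))
          for load_case, amplitude in global_amplitudes.items():
              stress += amplitude * local_fields[load_case]
          return stress

      # Amplitudes at one thread, as would be returned by the global stud-scale model.
      print(thread_scale_stress({"axial": 0.7, "bending": 0.1}, local_fields))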

  5. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre-resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre-resolution patient anatomy, it is possible to obtain a millimetre-resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre-resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, the E-vector-field distribution computed on the geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximated the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
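    Two of the three down-scaling strategies can be illustrated on blocks of voxels as below: 'winner-takes-all' keeps the most frequent tissue label per coarse voxel, while 'volumetric averaging' averages the dielectric property values. The factor-of-five block size mirrors the 2 mm to 1 cm down-scaling in the paper, but the data are arbitrary and the anisotropic (direction-dependent) variant is omitted.

      import numpy as np

      def winner_takes_all(labels, f):
          # Down-scale a 3-D label grid by factor f: most frequent label per block.
          nx, ny, nz = (s // f for s in labels.shape)
          out = np.empty((nx, ny, nz), dtype=labels.dtype)
          for i in range(nx):
              for j in range(ny):
                  for k in range(nz):
                      block = labels[i*f:(i+1)*f, j*f:(j+1)*f, k*f:(k+1)*f]
                      out[i, j, k] = np.bincount(block.ravel()).argmax()
          return out

      def volumetric_averaging(props, f):
          # Down-scale a 3-D property grid (e.g. conductivity) by block averaging.
          nx, ny, nz = (s // f for s in props.shape)
          return props[:nx*f, :ny*f, :nz*f].reshape(nx, f, ny, f, nz, f).mean(axis=(1, 3, 5))

      rng = np.random.default_rng(1)
      tissue = rng.integers(0, 4, size=(20, 20, 20))    # 2 mm tissue labels
      sigma = rng.uniform(0.1, 2.0, size=(20, 20, 20))  # hypothetical conductivities
      print(winner_takes_all(tissue, 5).shape, volumetric_averaging(sigma, 5).shape)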

  6. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  7. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    Science.gov (United States)

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation is based on computer graphics to generate a realistic 3D structural scene of vegetation and to simulate the canopy radiation regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build up a realistic structural scene for a forest, and it is difficult for the radiosity method to handle so many facets. In order to make the radiosity method applicable to forest scenes at the pixel scale, the authors propose simplifying the structure of the forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the characteristics of internal photon transport within a real crown, the authors assigned optical characteristics to the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometrical optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at the pixel scale. The computer simulation results are compared with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, and are in agreement with the GOMS simulation results and the MISR BRF. Some problems remain to be solved, but the authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.

  8. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    OpenAIRE

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2017-01-01

    Moment magnitudes (M_W) are computed for small and moderate earthquakes using a spectral fitting method. 40 of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available M_W values of between 2.8 and 5.0. Using the presented method, M_W are computed for 679 earthquakes in Switzerland with a minimum M_L = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation bet...

  9. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H [Institute for Molecular Science, Okazaki, Aichi (Japan)

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  10. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  11. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code to predict the plastic-strain-induced texture evolution, yield loci and formability of sheet metal. The double-scale structure consists of a crystal aggregation (the micro-structure) and a macroscopic elastic-plastic continuum. First, we measure crystal morphologies using an SEM-EBSD apparatus and define a unit cell of the micro-structure which satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'Benchmark' aluminum A6022 polycrystal sheets. It reveals that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture, the evolution of anisotropic hardening, and the sheet deformation. Since the multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for quick calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations

  12. A new way of estimating compute-boundedness and its application to dynamic voltage scaling

    DEFF Research Database (Denmark)

    Venkatachalam, Vasanth; Franz, Michael; Probst, Christian W.

    2007-01-01

    Many dynamic voltage scaling algorithms rely on measuring hardware events (such as cache misses) for predicting how much a workload can be slowed down with acceptable performance loss. The events measured, however, are at best indirectly related to execution time and clock frequency. By relating...... these two indicators logically, we propose a new way of predicting a workload's compute-boundedness that is based on direct observation, and only requires measuring the total execution cycles for the two highest clock frequencies. Our predictor can be used to develop dynamic voltage scaling algorithms...
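    One way to read the proposed predictor is sketched below: the total cycle counts measured at the two highest clock frequencies are split into a frequency-independent compute part and a stall part that grows linearly with frequency, because off-chip memory latency is fixed in wall-clock time. This linear decomposition is our own reading of the idea, offered as an assumption, not necessarily the exact model of the paper.

      def compute_boundedness(cycles_hi, cycles_lo, f_hi, f_lo):
          # Estimate the compute-bound fraction from total cycles at the two highest
          # frequencies f_hi > f_lo (Hz), assuming cycles(f) = c_compute + t_mem * f.
          t_mem = (cycles_hi - cycles_lo) / (f_hi - f_lo)   # memory stall time (s)
          c_compute = cycles_hi - t_mem * f_hi              # frequency-independent part
          return c_compute / cycles_hi                      # 1.0 => fully compute-bound

      # Hypothetical counter readings for one workload interval.
      print(compute_boundedness(cycles_hi=2.0e9, cycles_lo=1.8e9, f_hi=2.0e9, f_lo=1.6e9))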

  13. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    International Nuclear Information System (INIS)

    Shi Siqi; Zhao Yan; Wu Qu; Gao Jian; Liu Yue; Ju Wangwei; Ouyang Chuying; Xiao Ruijuan

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. (topical review)

  14. Advancing nanoelectronic device modeling through peta-scale computing and deployment on nanoHUB

    International Nuclear Information System (INIS)

    Haley, Benjamin P; Luisier, Mathieu; Klimeck, Gerhard; Lee, Sunhee; Ryu, Hoon; Bae, Hansang; Saied, Faisal; Clark, Steve

    2009-01-01

    Recent improvements to existing HPC codes NEMO 3-D and OMEN, combined with access to peta-scale computing resources, have enabled realistic device engineering simulations that were previously infeasible. NEMO 3-D can now simulate 1 billion atom systems, and, using 3D spatial decomposition, scale to 32768 cores. Simulation time for the band structure of an experimental P doped Si quantum computing device fell from 40 minutes to 1 minute. OMEN can perform fully quantum mechanical transport calculations for real-world UTB FETs on 147,456 cores in roughly 5 minutes. Both of these tools power simulation engines on the nanoHUB, giving the community access to previously unavailable research capabilities.

  15. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    International Nuclear Information System (INIS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-01-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer an alternative new mechanism for understanding the emergence of scaling in countless processes. The corresponding power-law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to −1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of the network's details, and that the exponent of visiting times is related to the number of cycles in a network could be relevant for a series of applications in traffic, transport and supply chain management. (paper)
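    The noise-free, uniform-prior case is easy to simulate and already shows the Zipf exponent: start at a random state, jump to a state drawn uniformly below the current one until state 1 is reached, and tally the visits. The sketch below covers only this simplest SSRP; the non-uniform priors and network diffusion studied in the paper are not reproduced.

      import numpy as np

      def ssrp_visits(N=1000, runs=20000, seed=0):
          # Visit counts of the basic sample space reducing process on states 1..N.
          rng = np.random.default_rng(seed)
          visits = np.zeros(N + 1)
          for _ in range(runs):
              state = rng.integers(1, N + 1)       # uniform random starting state
              while state > 1:
                  visits[state] += 1
                  state = rng.integers(1, state)   # uniform jump to a lower state
              visits[1] += 1
          return visits[1:]

      v = ssrp_visits()
      states = np.arange(1, len(v) + 1)
      # Log-log slope of the visit distribution; expected to be close to -1 (Zipf).
      print(round(np.polyfit(np.log(states[v > 0]), np.log(v[v > 0]), 1)[0], 2))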

  16. A Multi-Scale Computational Study on the Mechanism of Streptococcus pneumoniae Nicotinamidase (SpNic)

    OpenAIRE

    Ion, Bogdan; Kazim, Erum; Gauld, James

    2014-01-01

    Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The o...

  17. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. By large-scale simulations we mean simulations that involve such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, the analysis of uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge and to arrive at a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches, by reference to human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to deductive and inductive reasoning approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  18. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    Tri-isotropic (TRISO) fuel particle coating is critical for the future use of nuclear energy produced by advanced gas reactors (AGRs). The fuel kernels are coated using chemical vapor deposition in a spouted fluidized bed. The challenges encountered in operating TRISO fuel coaters are due to the fact that in modern AGRs, such as High Temperature Gas Reactors (HTGRs), the acceptable level of defective/failed coated particles is essentially zero. This specification requires processes that produce coated spherical particles with even coatings having extremely low defect fractions. Unfortunately, the scale-up and design of the current processes and coaters have been based on empirical approaches, and the coaters are operated as black boxes. Hence, a voluminous amount of experimental development and trial-and-error work has been conducted. It has been clearly demonstrated that the quality of the coating applied to the fuel kernels is impacted by the hydrodynamics, solids flow field, and flow regime characteristics of the spouted bed coaters, which themselves are influenced by design parameters and operating variables. Further complicating the outlook for future fuel-coating technology and nuclear energy production is the fact that a variety of new concepts will involve fuel kernels of different sizes and with compositions of different densities. Therefore, without a fundamental understanding of the underlying phenomena of the spouted bed TRISO coater, a significant amount of effort is required for production of each type of particle, with a significant risk of not meeting the specifications. This difficulty will significantly and negatively impact the applications of AGRs for power generation and cause further challenges to them as an alternative source of commercial energy production. Accordingly, the proposed work seeks to overcome such hurdles and advance the scale-up, design, and performance of TRISO fuel particle spouted bed coaters. The overall objectives of the proposed work are

  19. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, building on an analysis of the characteristics and shortcomings of the genetic algorithm and the support vector machine. In the cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and this optimized parallel SVM model is then used to predict traffic flow. On the basis of traffic flow data from Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
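    A scaled-down, single-machine illustration of the GA-SVM idea is sketched below: a tiny genetic algorithm (selection, blend crossover, Gaussian mutation) searches the (C, gamma) space of an RBF support vector regressor, using cross-validated error as the fitness. Synthetic data stand in for the Guangzhou traffic-flow series, and the cloud/MPI parallelisation discussed in the paper is omitted.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      X = rng.uniform(0, 24, size=(300, 1))                 # hour of day (synthetic)
      y = 50 + 30 * np.sin(2 * np.pi * X[:, 0] / 24) + rng.normal(0, 3, 300)

      def fitness(ind):
          C, gamma = 10.0 ** ind                            # genes live on a log scale
          return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                                 scoring="neg_mean_squared_error").mean()

      def genetic_search(pop_size=12, generations=10):
          pop = rng.uniform([-1, -3], [3, 1], size=(pop_size, 2))   # log10(C), log10(gamma)
          for _ in range(generations):
              scores = np.array([fitness(ind) for ind in pop])
              parents = pop[np.argsort(scores)[-pop_size // 2:]]    # keep the best half
              n_child = pop_size - len(parents)
              i1 = rng.integers(0, len(parents), n_child)
              i2 = rng.integers(0, len(parents), n_child)
              alpha = rng.uniform(size=(n_child, 1))
              children = alpha * parents[i1] + (1 - alpha) * parents[i2]  # crossover
              children += rng.normal(0, 0.2, children.shape)              # mutation
              pop = np.vstack([parents, children])
          best = pop[np.argmax([fitness(ind) for ind in pop])]
          return 10.0 ** best                                             # (C, gamma)

      print(genetic_search())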

  20. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
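    The locality idea is simple to demonstrate: if particles are kept sorted by the grid cell they occupy, the charge-accumulation loop walks through the field array almost sequentially instead of jumping randomly through (virtual) memory. The NumPy sketch below illustrates only the sort and the accumulation on a 1-D grid; it is not the original simulation code.

      import numpy as np

      rng = np.random.default_rng(3)
      ngrid = 4096
      x = rng.uniform(0.0, 1.0, size=5_000_000)     # particle positions in [0, 1)

      cell = (x * ngrid).astype(np.int64)           # grid cell index of each particle
      order = np.argsort(cell, kind="stable")       # sort particles by cell index
      x_sorted, cell_sorted = x[order], cell[order]

      # Nearest-grid-point charge accumulation now touches the density array in
      # near-monotonic order, so page faults and cache misses are greatly reduced.
      density = np.bincount(cell_sorted, minlength=ngrid).astype(float)
      print(density[:5])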

  1. Auto-Scaling of Geo-Based Image Processing in an OpenStack Cloud Computing Environment

    OpenAIRE

    Sanggoo Kang; Kiwon Lee

    2016-01-01

    Cloud computing is a base platform for the distribution of large volumes of data and high-performance image processing on the Web. Despite wide applications in Web-based services and their many benefits, geo-spatial applications based on cloud computing technology are still developing. Auto-scaling realizes automatic scalability, i.e., the scale-out and scale-in processing of virtual servers in a cloud computing environment. This study investigates the applicability of auto-scaling to geo-bas...

  2. Scale interactions in economics: application to the evaluation of the economic damages of climatic change and of extreme events

    International Nuclear Information System (INIS)

    Hallegatte, S.

    2005-06-01

    Growth models, which neglect economic disequilibria, considered as temporary, are generally used to evaluate the damage caused by climatic change. This work shows, through a series of modeling experiments, the importance of disequilibria and of the endogenous variability of the economy in the evaluation of damages due to extreme events and climatic change. It demonstrates the impossibility of separating the evaluation of damages from the representation of growth and of economic dynamics: the welfare losses will depend both on the nature and intensity of the impacts and on the dynamics and situation of the economy to which they apply. Thus, the uncertainties about the damaging effects of future climatic changes stem both from scientific uncertainties and from uncertainties about the future organization of our economies. (J.S.)

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  4. Scaling law for noise variance and spatial resolution in differential phase contrast computed tomography

    International Nuclear Information System (INIS)

    Chen Guanghong; Zambelli, Joseph; Li Ke; Bevins, Nicholas; Qi Zhihua

    2011-01-01

    Purpose: The noise variance versus spatial resolution relationship in differential phase contrast (DPC) projection imaging and computed tomography (CT) are derived and compared to conventional absorption-based x-ray projection imaging and CT. Methods: The scaling law for DPC-CT is theoretically derived and subsequently validated with phantom results from an experimental Talbot-Lau interferometer system. Results: For the DPC imaging method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray imaging projections. However, both in theory and experimental results, in DPC-CT the noise variance scales with spatial resolution following an inverse linear relationship with fixed slice thickness. Conclusions: The scaling law in DPC-CT implies a lesser noise, and therefore dose, penalty for moving to higher spatial resolutions when compared to conventional absorption-based CT in order to maintain the same contrast-to-noise ratio.
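    In the notation of the abstract, with Δx the spatial resolution and the slice thickness held fixed, the two scaling relations can be written as follows (a restatement of the abstract, not an added result):

      \sigma^{2}_{\text{projection}} \propto (\Delta x)^{-2}
      \quad \text{(absorption and DPC projections alike)},
      \qquad
      \sigma^{2}_{\text{DPC-CT}} \propto (\Delta x)^{-1}
      \quad \text{(fixed slice thickness)}.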

  5. Extreme value statistics for annual minimum and trough-under-threshold precipitation at different spatio-temporal scales

    NARCIS (Netherlands)

    Booij, Martijn J.; de Wit, Marcel J.M.

    2010-01-01

    The aim of this paper is to quantify meteorological droughts and assign return periods to these droughts. Moreover, the relation between meteorological and hydrological droughts is explored. This has been done for the River Meuse basin in Western Europe at different spatial and temporal scales to

  6. Physical and mechanical metallurgy of zirconium alloys for nuclear applications: a multi-scale computational study

    Energy Technology Data Exchange (ETDEWEB)

    Glazoff, Michael Vasily [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-10-01

    In the post-Fukushima world, the stability of materials under extreme conditions is an important issue for the safety of nuclear reactors. Because the nuclear industry is going to continue using advanced zirconium cladding materials in the foreseeable future, it becomes critical to gain a fundamental understanding of several interconnected problems. First, what are the thermodynamic and kinetic factors affecting the oxidation and hydrogen pick-up of these materials at normal and off-normal conditions, and in long-term storage? Secondly, what protective coatings (if any) could be used in order to gain extremely valuable time at off-normal conditions, e.g., when the temperature exceeds the critical value of 2200°F? Thirdly, the kinetics of oxidation of such a protective coating or braiding needs to be quantified. Lastly, even if some degree of success is achieved along this path, it is absolutely critical to have automated inspection algorithms allowing cladding defects to be identified as soon as possible. This work strives to explore these interconnected factors from the most advanced computational perspective, utilizing such modern techniques as first-principles atomistic simulations, computational thermodynamics of materials, diffusion modeling, and the morphological algorithms of image processing for defect identification. Consequently, it consists of four parts dealing with these four problem areas, preceded by an introduction and formulation of the problems studied. In the 1st part an effort was made to employ computational thermodynamics and ab initio calculations to shed light upon the different stages of oxidation of Zircaloy-2 and Zircaloy-4, the role of microstructure optimization in increasing their thermal stability, and the process of hydrogen pick-up, both under normal working conditions and in long-term storage. The 2nd part deals with the need to understand the influence and respective roles of the two different plasticity mechanisms in Zr nuclear alloys: twinning

  7. Computational Fluid Dynamics Study on the Effects of RATO Timing on the Scale Model Acoustic Test

    Science.gov (United States)

    Nielsen, Tanner; Williams, B.; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of lift off, one being the overpressure environment that develops shortly after booster ignition. The SLS lift off configuration consists of four RS-25 liquid thrusters on the core stage, with two solid boosters connected to each side. Past experience with scale model testing at MSFC (in ER42) has shown that there is a delay in the ignition of the Rocket Assisted Take Off (RATO) motor, which is used as the 5% scale analog of the solid boosters, after the signal to ignite is given. This delay can range from 0 to 16.5 ms. While such a small delay may be insignificant in the case of the full-scale SLS, it can significantly alter the data obtained during the SMAT due to the much smaller geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as occurs during full scale. However, the SMAT geometry is much smaller, allowing the pressure waves to move down the exhaust duct, through the trench, and impact the vehicle model much faster than occurs at full scale. To better understand the effect of the RATO timing simultaneity on the SMAT IOP test data, a computational fluid dynamics (CFD) analysis was performed using the Loci/CHEM CFD software program. Five different timing offsets, based on RATO ignition delay statistics, were simulated. A variety of results and comparisons will be given, assessing the overall effect of RATO timing simultaneity on the SMAT overpressure environment.

  8. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with different hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software

  9. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    Science.gov (United States)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object to a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects can be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distances between objects in the database are known and we want to minimize the number of distances computed online between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are described purely by their distances to each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest one, where n is the total number of objects in the database.
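
    To make the pruning idea concrete, the sketch below shows a generic pivot-based nearest-neighbor search that uses the triangle inequality to skip most of the expensive on-line distance evaluations. It is only an illustration of the principle, not the randomized indexing scheme proposed in the paper, and the names (`pairwise`, `query_dist`, `num_pivots`) are hypothetical.

```python
import random

def nearest_neighbor(query, objects, pairwise, query_dist, num_pivots=8):
    """Pivot-based exact nearest-neighbor search in a metric space.

    objects    : list of database objects, indexed 0..n-1
    pairwise   : precomputed table, pairwise[i][j] = d(objects[i], objects[j])
    query_dist : function computing the expensive distance d(query, objects[i])

    Only calls to query_dist count as on-line distance computations; the
    triangle inequality prunes most of them.
    """
    n = len(objects)
    pivots = random.sample(range(n), min(num_pivots, n))
    d_qp = {p: query_dist(query, objects[p]) for p in pivots}  # paid up front

    best_i = min(d_qp, key=d_qp.get)
    best_d = d_qp[best_i]

    for i in range(n):
        if i in d_qp:
            continue
        # |d(q, p) - d(p, i)| is a lower bound on d(q, i) in any metric space.
        lower = max(abs(d_qp[p] - pairwise[p][i]) for p in pivots)
        if lower >= best_d:
            continue                       # pruned without an expensive call
        d = query_dist(query, objects[i])
        if d < best_d:
            best_i, best_d = i, d
    return objects[best_i], best_d

# Toy usage with 1-D points and absolute difference as the metric.
pts = [1.0, 4.0, 9.0, 10.5, 20.0]
table = [[abs(a - b) for b in pts] for a in pts]
print(nearest_neighbor(8.7, pts, table, lambda q, o: abs(q - o), num_pivots=2))
```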

  10. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the power consumed by their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to the workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its infancy. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, a power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique.
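
    For readers unfamiliar with why frequency scaling saves energy, the sketch below uses the textbook CMOS relation (dynamic power roughly proportional to C·V²·f) to compare the energy of a fixed amount of work at different operating points. The operating points, capacitance and static power are illustrative assumptions, not values taken from WorkflowSim or its power model.

```python
def task_energy(cycles, freq_hz, voltage, capacitance=1e-9, static_power=0.5):
    """Energy in joules to execute `cycles` CPU cycles at one DVFS operating point."""
    exec_time = cycles / freq_hz                       # seconds
    dynamic_power = capacitance * voltage**2 * freq_hz  # ~ C * V^2 * f
    return (dynamic_power + static_power) * exec_time

# Hypothetical operating points: (frequency in Hz, supply voltage in volts).
OPERATING_POINTS = [(1.0e9, 0.9), (2.0e9, 1.0), (3.0e9, 1.2)]

if __name__ == "__main__":
    cycles = 3.0e9
    for f, v in OPERATING_POINTS:
        e = task_energy(cycles, f, v)
        print(f"{f / 1e9:.1f} GHz @ {v:.1f} V: {cycles / f:.2f} s, {e:.2f} J")
```

    Because static power is charged for the whole execution time, the lowest frequency is not automatically the most energy-efficient point; balancing this trade-off against deadlines is exactly what a DVFS governor has to do.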

  11. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

    Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the power consumed by their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to the workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its infancy. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, a power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique.

  12. Field limit and nano-scale surface topography of superconducting radio-frequency cavity made of extreme type II superconductor

    OpenAIRE

    Kubo, Takayuki

    2014-01-01

    The field limit of a superconducting radio-frequency cavity made of a type II superconductor with a large Ginzburg-Landau parameter is studied, taking the effects of nano-scale surface topography into account. If the surface is ideally flat, the field limit is imposed by the superheating field. On the surface of a cavity, however, nano-defects are distributed almost continuously and suppress the superheating field everywhere. The field limit is imposed by an effective superheating field given by the pro...

  13. Removal of volatile organic compounds at extreme shock-loading using a scaled-up pilot rotating drum biofilter.

    Science.gov (United States)

    Sawvel, Russell A; Kim, Byung; Alvarez, Pedro J J

    2008-11-01

    A pilot-scale rotating drum biofilter (RDB), which is a novel biofilter design that offers flexible flow-through configurations, was used to treat complex and variable volatile organic compound (VOC) emissions, including shock loadings, emanating from paint drying operations at an Army ammunition plant. The RDB was seeded with municipal wastewater activated sludge. Removal efficiencies up to 86% and an elimination capacity of 5.3 g chemical oxygen demand (COD) m(-3) hr(-1) were achieved at a filter-medium contact time of 60 sec. Efficiency increased at higher temperatures that promote higher biological activity, and decreased at lower pH, which dropped down to pH 5.5 possibly as a result of carbon dioxide and volatile fatty acid production and ammonia consumption during VOC degradation. In comparison, other studies have shown that a bench-scale RDB could achieve a removal efficiency of 95% and elimination capacity of 331 g COD m(-3) hr(-1). Sustainable performance of the pilot-scale RDB was challenged by the intermittent nature of painting operations, which typically resulted in 3-day long shutdown periods when bacteria were not fed. This challenge was overcome by adding sucrose (2 g/L weekly) as an auxiliary substrate to sustain metabolic activity during shutdown periods.

  14. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James W. [Univ. of California, Berkeley, CA (United States)

    2017-09-14

    This project addresses both communication-avoiding algorithms and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or between processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for dense and sparse, direct and iterative linear algebra, attaining new communication lower bounds and obtaining large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g., A(i), B(i, j+k, k+3*m-7, …), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with the nonassociativity of floating-point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a
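
    The order-dependence problem, and one way around it, can be illustrated with a toy quantization scheme: rounding every addend onto a fixed binary grid and accumulating with exact integer arithmetic makes the result independent of summation order. This is only a minimal sketch of that idea under an assumed grid spacing; it is not the algorithm described in the report, and its accuracy is limited by the chosen grid.

```python
import math

def reproducible_sum(values, grid_exponent=-40):
    """Order-independent summation by quantizing to a fixed binary grid.

    Each value is rounded to an integer multiple of 2**grid_exponent and
    accumulated with exact (arbitrary-precision) integer arithmetic, so the
    result is bitwise identical for any summation order or any partitioning
    into parallel partial sums.
    """
    scale = math.ldexp(1.0, grid_exponent)          # 2**grid_exponent
    total = sum(int(round(v / scale)) for v in values)
    return total * scale

# Partial sums computed in any order give the same bits:
data = [0.1, 1e8, -1e8, 0.2, 0.3] * 1000
assert reproducible_sum(data) == reproducible_sum(list(reversed(data)))
```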

  15. Characterization of the Scale Model Acoustic Test Overpressure Environment using Computational Fluid Dynamics

    Science.gov (United States)

    Nielsen, Tanner; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of lift off, one being the overpressure environment that develops shortly after booster ignition. The pressure waves that propagate from the mobile launcher (ML) exhaust hole are defined as the ignition overpressure (IOP), while the portion of the pressure waves that exit the duct or trench are the duct overpressure (DOP). Distinguishing the IOP and DOP in scale model test data has been difficult in past experiences and in early SMAT results, due to the effects of scaling the geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as occurs in full scale. However, the SMAT geometry is twenty times smaller, allowing the pressure waves to move down the exhaust hole, through the trench and duct, and impact the vehicle model much faster than occurs at full scale. The DOP waves impact portions of the vehicle at the same time as the IOP waves, making it difficult to distinguish the different waves and fully understand the data. To better understand the SMAT data, a computational fluid dynamics (CFD) analysis was performed with a fictitious geometry that isolates the IOP and DOP. The upper and lower portions of the domain were segregated to accomplish the isolation in such a way that the flow physics were not significantly altered. The Loci/CHEM CFD software program was used to perform this analysis.

  16. FOX: A Fault-Oblivious Extreme-Scale Execution Environment Boston University Final Report Project Number: DE-SC0005365

    Energy Technology Data Exchange (ETDEWEB)

    Appavoo, Jonathan [Boston Univ., MA (United States)

    2013-03-17

    Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. The FOX project explored systems software and runtime support for a new approach to the data and work distribution for fault oblivious application execution. Our major OS work at Boston University focused on developing a new light-weight operating systems model that provides an appropriate context for both multi-core and multi-node application development. This work is discussed in section 1. Early on in the FOX project BU developed infrastructure for prototyping dynamic HPC environments in which the sets of nodes that an application is run on can be dynamically grown or shrunk. This work was an extension of the Kittyhawk project and is discussed in section 2. Section 3 documents the publications and software repositories that we have produced. To put our work in context of the complete FOX project contribution we include in section 4 an extended version of a paper that documents the complete work of the FOX team.

  17. Clinical signs and physical function in neck and upper extremities among elderly female computer users: the NEW study

    DEFF Research Database (Denmark)

    Juul-Kristensen, B; Kadefors, R; Hansen, K

    2006-01-01

    …-reported neck/shoulder trouble have more clinical findings than those not reporting trouble, and that a corresponding pattern holds true for physical function. In total, 42 questionnaire-defined NS cases and 61 NS controls participated and went through a clinical examination of the neck and upper extremities… and five physical function tests: maximal voluntary contraction (MVC) of shoulder elevation, abduction, and handgrip, as well as endurance at 30% MVC shoulder elevation and a physical performance test. Based on clinical signs and symptoms, trapezius myalgia (38%), tension neck syndrome (17%) and cervicalgia (17%) were the most frequent diagnoses among NS cases, and were significantly more frequent among NS cases than NS controls. A total of 60% of the subjects with reported trouble had one or several of the diagnoses located in the neck/shoulder. Physical function of the shoulder was lower…

  18. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) x 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  19. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) x 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  20. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    Full Text Available The technologies related to green energy, smart cities and similar areas that have been developing dynamically in recent years frequently face problems of a computational rather than a technological nature. One example is the ability to accurately predict the weather conditions for PV farms or wind turbines. Another group of issues is related to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term “large-scale” refers to an entire city area containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform the design-related computations in parallel. An important measure introduced in the article to indicate optimization progress is entropy, which also allows the optimization to be terminated once the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach.
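
    The entropy-based stopping criterion can be illustrated with a few lines of code: when the candidate solutions held by the agents become nearly identical, the Shannon entropy of the population drops towards zero and the distributed search can stop. The sketch below is only an illustration of that idea; the key function, threshold and example values are hypothetical and are not taken from the article.

```python
import math
from collections import Counter

def population_entropy(solutions, key=lambda s: s):
    """Shannon entropy (in bits) of a population of candidate solutions.

    High entropy means the agents are still exploring very different designs;
    entropy near zero means the population has converged and the distributed
    optimization can be stopped.
    """
    counts = Counter(key(s) for s in solutions)
    n = sum(counts.values())
    return sum((c / n) * math.log2(n / c) for c in counts.values())

def should_terminate(solutions, threshold_bits=0.5):
    return population_entropy(solutions) < threshold_bits

# Hypothetical example: agents proposing dimming levels (%) for one segment.
print(population_entropy([70, 70, 80, 70, 60]))   # still diverse (> 1 bit)
print(should_terminate([70, 70, 70, 70, 70]))     # converged -> True
```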

  1. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    Energy Technology Data Exchange (ETDEWEB)

    Yadigaroglu, G. [Swiss Federal Institute of Technology-Zurich (ETHZ), Nuclear Engineering Laboratory, ETH-Zentrum, CLT CH-8092 Zurich (Switzerland)]. E-mail: yadi@ethz.ch

    2005-02-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods.

  2. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    International Nuclear Information System (INIS)

    Yadigaroglu, G.

    2005-01-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods

  3. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the necessity for resolving turbulent flame fronts; this is imperative because of the relatively coarse computational grids which must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do this in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
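
    The core trick, artificially thickening the flame while preserving the burn velocity, can be written compactly. The scaling below is the standard thickened-flame relation, assumed here for illustration; the report's phenomenological burn-velocity closure may differ in detail.

```latex
s_L \;\propto\; \sqrt{D\,\dot{\omega}}, \qquad
\delta_L \;\propto\; \frac{D}{s_L}
\qquad\Longrightarrow\qquad
D \to F\,D,\;\; \dot{\omega} \to \frac{\dot{\omega}}{F}
\;\;\Rightarrow\;\;
s_L' = \sqrt{(F D)\,\tfrac{\dot{\omega}}{F}} = s_L,
\qquad \delta_L' = F\,\delta_L .
```

    In words: multiplying the diffusivity by a factor F while dividing the reaction rate by F leaves the burn velocity unchanged but makes the numerical flame front F times thicker, so it can be resolved on a coarse engineering grid.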

  4. Large Scale Beam-beam Simulations for the CERN LHC using Distributed Computing

    CERN Document Server

    Herr, Werner; McIntosh, E; Schmidt, F

    2006-01-01

    We report on a large scale simulation of beam-beam effects for the CERN Large Hadron Collider (LHC). The stability of particles which experience head-on and long-range beam-beam effects was investigated for different optical configurations and machine imperfections. To cover the interesting parameter space required computing resources not available at CERN. The necessary resources were available in the LHC@home project, based on the BOINC platform. At present, this project makes more than 60000 hosts available for distributed computing. We shall discuss our experience using this system during a simulation campaign of more than six months and describe the tools and procedures necessary to ensure consistent results. The results from this extended study are presented and future plans are discussed.

  5. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    Science.gov (United States)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by convention

  6. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled highly fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.

  7. Large-scale computational drug repositioning to find treatments for rare diseases.

    Science.gov (United States)

    Govindaraj, Rajiv Gandhi; Naderi, Misagh; Singha, Manali; Lemoine, Jeffrey; Brylinski, Michal

    2018-01-01

    Rare, or orphan, diseases are conditions afflicting a small subset of people in a population. Although these disorders collectively pose significant health care problems, drug companies require government incentives to develop drugs for rare diseases due to extremely limited individual markets. Computer-aided drug repositioning, i.e., finding new indications for existing drugs, is a cheaper and faster alternative to traditional drug discovery, offering a promising avenue for orphan drug research. Structure-based matching of drug-binding pockets is among the most promising computational techniques to inform drug repositioning. In order to find new targets for known drugs, ultimately leading to drug repositioning, we recently developed eMatchSite, a new computer program to compare drug-binding sites. In this study, eMatchSite is combined with virtual screening to systematically explore opportunities to reposition known drugs to proteins associated with rare diseases. The effectiveness of this integrated approach is demonstrated for a kinase inhibitor, which is a confirmed candidate for repositioning to synapsin Ia. The resulting dataset comprises 31,142 putative drug-target complexes linked to 980 orphan diseases. The modeling accuracy is evaluated against the structural data recently released for tyrosine-protein kinase HCK. To illustrate how potential therapeutics for rare diseases can be identified, we discuss a possibility to repurpose a steroidal aromatase inhibitor to treat Niemann-Pick disease type C. Overall, the exhaustive exploration of the drug repositioning space exposes new opportunities to combat orphan diseases with existing drugs. DrugBank/Orphanet repositioning data are freely available to the research community at https://osf.io/qdjup/.

  8. Computational Fluid Dynamics Modeling Of Scaled Hanford Double Shell Tank Mixing - CFD Modeling Sensitivity Study Results

    International Nuclear Information System (INIS)

    Jackson, V.L.

    2011-01-01

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  9. Full-color large-scaled computer-generated holograms using RGB color filters.

    Science.gov (United States)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  10. A computationally inexpensive CFD approach for small-scale biomass burners equipped with enhanced air staging

    International Nuclear Information System (INIS)

    Buchmayr, M.; Gruber, J.; Hargassner, M.; Hochenauer, C.

    2016-01-01

    Highlights: • Time-efficient CFD model to predict biomass boiler performance. • Boundary conditions for numerical modeling are provided by measurements. • Tars in the product from primary combustion were considered. • Simulation results were validated by experiments on a real-scale reactor. • Very good accordance between experimental and simulation results. - Abstract: Computational Fluid Dynamics (CFD) is an upcoming technique for optimization and as a part of the design process of biomass combustion systems. So far, an accurate simulation of biomass combustion can only be provided with high computational effort. This work presents an accurate, time-efficient CFD approach for small-scale biomass combustion systems equipped with enhanced air staging. The model can handle the high amount of biomass tars in the primary combustion product at very low primary air ratios. Gas-phase combustion in the freeboard was modeled with the Steady Flamelet Model (SFM) together with a detailed heptane combustion mechanism. The advantage of the SFM is that complex combustion chemistry can be taken into account at low computational effort, because only two additional transport equations have to be solved to describe the chemistry in the reacting flow. Boundary conditions for the primary combustion product composition were obtained from the fuel bed by experiments. The fuel bed data were used as the fuel inlet boundary condition for the gas-phase combustion model. The numerical and experimental investigations were performed for different operating conditions and varying wood-chip moisture on a specially designed real-scale reactor. The numerical predictions were validated with experimental results and a very good agreement was found. With the presented approach accurate results can be provided within 24 h using a standard Central Processing Unit (CPU) consisting of six cores. Case studies, e.g. for combustion geometry improvement, can be realized effectively due to the short calculation

  11. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    Science.gov (United States)

    Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and Shanghai Pujiang Program, China (Grant No. 14PJ1403900).

  12. Analysis of a computational benchmark for a high-temperature reactor using SCALE

    International Nuclear Information System (INIS)

    Goluoglu, S.

    2006-01-01

    Several proposed advanced reactor concepts require methods to address effects of double heterogeneity. In doubly heterogeneous systems, heterogeneous fuel particles in a moderator matrix form the fuel region of the fuel element and thus constitute the first level of heterogeneity. Fuel elements themselves are also heterogeneous with fuel and moderator or reflector regions, forming the second level of heterogeneity. The fuel elements may also form regular or irregular lattices. A five-phase computational benchmark for a high-temperature reactor (HTR) fuelled with uranium or reactor-grade plutonium has been defined by the Organization for Economic Cooperation and Development, Nuclear Energy Agency (OECD NEA), Nuclear Science Committee, Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles. This paper summarizes the analysis results using the latest SCALE code system (to be released in CY 2006 as SCALE 5.1). (authors)

  13. Relations between work and upper extremity musculoskeletal problems (UEMSP) and the moderating role of psychosocial work factors on the relation between computer work and UEMSP.

    Science.gov (United States)

    Nicolakakis, Nektaria; Stock, Susan R; Abrahamowicz, Michal; Kline, Rex; Messing, Karen

    2017-11-01

    Computer work has been identified as a risk factor for upper extremity musculoskeletal problems (UEMSP). But few studies have investigated how psychosocial and organizational work factors affect this relation. Nor have gender differences in the relation between UEMSP and these work factors  been studied. We sought to estimate: (1) the association between UEMSP and a range of physical, psychosocial and organizational work exposures, including the duration of computer work, and (2) the moderating effect of psychosocial work exposures on the relation between computer work and UEMSP. Using 2007-2008 Québec survey data on 2478 workers, we carried out gender-stratified multivariable logistic regression modeling and two-way interaction analyses. In both genders, odds of UEMSP were higher with exposure to high physical work demands and emotionally demanding work. Additionally among women, UEMSP were associated with duration of occupational computer exposure, sexual harassment, tense situations when dealing with clients, high quantitative demands and lack of prospects for promotion, and among men, with low coworker support, episodes of unemployment, low job security and contradictory work demands. Among women, the effect of computer work on UEMSP was considerably increased in the presence of emotionally demanding work, and may also be moderated by low recognition at work, contradictory work demands, and low supervisor support. These results suggest that the relations between UEMSP and computer work are moderated by psychosocial work exposures and that the relations between working conditions and UEMSP are somewhat different for each gender, highlighting the complexity of these relations and the importance of considering gender.

  14. Impacts of an extreme cyclone event on landscape-scale savanna fire, productivity and greenhouse gas emissions

    International Nuclear Information System (INIS)

    Hutley, L B; Maier, S W; Evans, B J; Beringer, J; Cook, G D; Razon, E

    2013-01-01

    North Australian tropical savanna accounts for 12% of the world's total savanna land cover. Accordingly, understanding the processes that govern carbon, water and energy exchange within this biome is critical to global carbon and water budgeting. Climate and disturbances drive ecosystem carbon dynamics. Savanna ecosystems of the coastal and sub-coastal regions of north Australia experience a unique combination of climatic extremes and are in a state of near constant disturbance from fire events (1 in 3 years), storms resulting in windthrow (1 in 5–10 years) and mega-cyclones (1 in 500–1000 years). Critically, these disturbances occur over large areas, creating a spatial and temporal mosaic of carbon sources and sinks. We quantify the impact on gross primary productivity (GPP) and fire occurrence from a tropical mega-cyclone, tropical Cyclone Monica (TC Monica), which affected 10,400 km^2 of savanna across north Australia, resulting in mortality and severe structural damage to ∼140 million trees. We estimate a net carbon equivalent emission of 43 Tg of CO2-e using the moderate resolution imaging spectroradiometer (MODIS) GPP product (MOD17A2) to quantify spatial and temporal patterns pre- and post-TC Monica. GPP was suppressed for four years after the event, equivalent to a loss of GPP of 0.5 Tg C over this period. On-ground fuel loads were estimated to potentially release 51.2 Mt CO2-e, equivalent to ∼10% of Australia's accountable greenhouse gas emissions. We present a simple carbon balance to examine the relative importance of frequency versus impact for a number of key disturbance processes, such as fire, termite consumption and intense but infrequent mega-cyclones. Our estimates suggest that fire and termite consumption had a larger impact on Net Biome Productivity than infrequent mega-cyclones. We demonstrate the importance of understanding how climate variability and disturbance impact savanna dynamics in the context of the increasing interest in

  15. Full Scale Test SSP 34m blade, edgewise loading LTT. Extreme load and PoC_InvE Data report

    DEFF Research Database (Denmark)

    Nielsen, Magda; Roczek-Sieradzan, Agnieszka; Jensen, Find Mølholt

    This report is the second report covering the research and demonstration project “Eksperimentel vingeforskning: Strukturelle mekanismer i nutidens og fremtidens store vinger under kombineret last” (Experimental blade research: structural mechanisms in current and future large blades under combined loading), supported by the EUDP program. A 34 m wind turbine blade from SSP-Technology A/S has been tested… in the edgewise direction (LTT). The blade has been subjected to thorough examination by means of strain gauges, displacement transducers and a 3D optical measuring system. This data report presents results obtained during full-scale testing of the blade up to 80% Risø load, where 80% Risø load corresponds to 100… stresses in the adhesive joints. Test results from measurements with the reinforcement have been compared to results without the coupling. The report presents only the results relevant for the 80% Risø load and those applicable to the investigation of the influence of the invention on the profile

  16. Fan-out Estimation in Spin-based Quantum Computer Scale-up.

    Science.gov (United States)

    Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R

    2017-10-17

    Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb confined (e.g. donor based spin qubits) or electrostatically confined (e.g. quantum dot based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout we estimate 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned-out independently. Assuming relatively long control-free interconnects the scalability can be extended. Ultimately, the universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.

  17. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Adelmann, A.; Markushin, V.

    2008-11-01

    ' (SNSP-HPCN) is discussing this complex. Scientific results which are made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected shows the current situation and a 5 year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started hardware-wise approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small scale parallel jobs, and is the only general purpose computing system at PSI. Several dedicated small scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement of the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair share policy that is implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of main PSI HPC users: (1) High Performance Computing is a main pillar in many important PSI research areas; (2) there is a lack in the order of 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use in the order of 600 processors per average production run; (4) the disk and tape storage growth is dramatic; (5) small HPC clusters located

  18. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    and Networking' (SNSP-HPCN) is discussing this complex. Scientific results which are made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected shows the current situation and a 5 year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started hardware-wise approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small scale parallel jobs, and is the only general purpose computing system at PSI. Several dedicated small scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement of the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair share policy that is implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of main PSI HPC users: (1) High Performance Computing is a main pillar in many important PSI research areas; (2) there is a lack in the order of 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use in the order of 600 processors per average production run; (4) the disk and tape storage growth

  19. Large-Scale Skin Resurfacing of the Upper Extremity in Pediatric Patients Using a Pre-Expanded Intercostal Artery Perforator Flap.

    Science.gov (United States)

    Wei, Jiao; Herrler, Tanja; Gu, Bin; Yang, Mei; Li, Qingfeng; Dai, Chuanchang; Xie, Feng

    2018-05-01

    The repair of extensive upper limb skin lesions in pediatric patients is extremely challenging due to substantial limitations of flap size and donor-site morbidity. We aimed to create an oversize preexpanded flap based on intercostal artery perforators for large-scale resurfacing of the upper extremity in children. Between March 2013 and August 2016, 11 patients underwent reconstructive treatment for extensive skin lesions in the upper extremity using a preexpanded intercostal artery perforator flap. Preoperatively, 2 to 4 candidate perforators were selected as potential pedicle vessels based on duplex ultrasound examination. After tissue expander implantation in the thoracodorsal area, regular saline injections were performed until the expanded flap was sufficient in size. Then, a pedicled flap was formed to resurface the skin lesion of the upper limb. The pedicles were transected 3 weeks after flap transfer. Flap survival, complications, and long-term outcome were evaluated. The average time of tissue expansion was 133 days with a mean final volume of 1713 mL. The thoracoabdominal flaps were based on 2 to 6 pedicles and used to resurface a mean skin defect area of 238 cm^2, ranging from 180 to 357 cm^2. In all cases, primary donor-site closure was achieved. Marginal necrosis was seen in 5 cases. The reconstructed limbs showed satisfactory outcome in both aesthetic and functional aspects. The preexpanded intercostal artery perforator flap enables 1-block repair of extensive upper limb skin lesions. Due to limited donor-site morbidity and a pedicled technique, this resurfacing approach represents a useful tool especially in pediatric patients.

  20. Validation of a scale measuring coping with extreme risks [Validación de una escala de afrontamiento frente a riesgos extremos]

    Directory of Open Access Journals (Sweden)

    Esperanza López-Vázquez

    2004-06-01

    Full Text Available OBJECTIVE: The objective of this study was to validate, in Mexico, the French coping scale "Échelle Toulousaine de Coping". MATERIAL AND METHODS: In the fall of 2001, the scale questionnaire was applied to 209 subjects living in different areas of Mexico, exposed to five different types of extreme natural or industrial risks. The discriminatory capacity of the items, as well as the factorial structure and internal consistency of the scale, were analyzed using Mann-Whitney's U test, principal components factorial analysis, and Cronbach's alpha. RESULTS: The final scale was composed of 26 items forming two groups: active coping and passive coping. Internal consistency of the instrument was high, both in the total sample and in the subsamples of natural and industrial risks. CONCLUSIONS: The coping scale is reliable and valid for the Mexican population.

  1. Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales

    Data.gov (United States)

    National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...

  2. Field limit and nano-scale surface topography of superconducting radio-frequency cavity made of extreme type II superconductor

    Science.gov (United States)

    Kubo, Takayuki

    2015-06-01

    The field limit of a superconducting radio-frequency cavity made of a type II superconductor with a large Ginzburg-Landau parameter is studied, taking the effects of nano-scale surface topography into account. If the surface is ideally flat, the field limit is imposed by the superheating field. On the surface of a cavity, however, nano-defects are distributed almost continuously and suppress the superheating field everywhere. The field limit is imposed by an effective superheating field given by the product of the superheating field for an ideally flat surface and a suppression factor that contains the effects of nano-defects. A nano-defect is modeled as a triangular groove with a depth smaller than the penetration depth. An analytical formula for the suppression factor of bulk and multilayer superconductors is derived in the framework of the London theory. As an immediate application, the suppression factor of dirty Nb processed by electropolishing is evaluated using the results of a surface topographic study. The estimated field limit is consistent with the present record field of nitrogen-doped Nb cavities. Suppression factors for the surfaces of other bulk and multilayer superconductors, and those after various surface processing technologies, can also be evaluated using the formula.
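
    Written out in symbols (the notation here is assumed for illustration and is not taken from the paper), the stated field limit is the ideal superheating field multiplied by a geometry-dependent suppression factor:

```latex
B_{\mathrm{max}} \;=\; \eta\, B_{\mathrm{sh}}, \qquad 0 < \eta \le 1,
```

    where η depends on the groove geometry (its slope and its depth relative to the penetration depth) and approaches unity as the surface becomes ideally flat.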

  3. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.
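
    As a rough illustration of how such a search-based index can be assembled, the sketch below combines z-scored weekly query volumes into a single series and correlates it with an official confidence index; the query basket, weighting scheme, and file names are hypothetical, not those used in the study.

```python
# Sketch of a search-based confidence index in the spirit of C3I.
# The query basket, weights, and file names are hypothetical.
import pandas as pd

# Weekly search volumes, one column per query term (e.g. "unemployment", "savings").
trends = pd.read_csv("trends_weekly.csv", index_col="week", parse_dates=True)
# Official monthly consumer confidence series for comparison.
cci = pd.read_csv("consumer_confidence.csv", index_col="month", parse_dates=True)["cci"]

# z-score each term so volumes are comparable, then average into a single index.
z = (trends - trends.mean()) / trends.std()
search_index = z.mean(axis=1)

# Resample to monthly resolution and compare with the official series.
monthly = search_index.resample("MS").mean()
aligned = pd.concat([monthly.rename("search_index"), cci], axis=1).dropna()
print("Pearson correlation:", aligned["search_index"].corr(aligned["cci"]))
```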

  4. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  5. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  6. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  7. Risk factors for neck and upper extremity disorders among computers users and the effect of interventions: an overview of systematic reviews.

    Science.gov (United States)

    Andersen, Johan H; Fallentin, Nils; Thomsen, Jane F; Mikkelsen, Sigurd

    2011-05-12

    To summarize systematic reviews that 1) assessed the evidence for causal relationships between computer work and the occurrence of carpal tunnel syndrome (CTS) or upper extremity musculoskeletal disorders (UEMSDs), or 2) reported on intervention studies among computer users/or office workers. PubMed, Embase, CINAHL and Web of Science were searched for reviews published between 1999 and 2010. Additional publications were provided by content area experts. The primary author extracted all data using a purpose-built form, while two of the authors evaluated the quality of the reviews using recommended standard criteria from AMSTAR; disagreements were resolved by discussion. The quality of evidence syntheses in the included reviews was assessed qualitatively for each outcome and for the interventions. Altogether, 1,349 review titles were identified, 47 reviews were retrieved for full text relevance assessment, and 17 reviews were finally included as being relevant and of sufficient quality. The degrees of focus and rigorousness of these 17 reviews were highly variable. Three reviews on risk factors for carpal tunnel syndrome were rated moderate to high quality, 8 reviews on risk factors for UEMSDs ranged from low to moderate/high quality, and 6 reviews on intervention studies were of moderate to high quality. The quality of the evidence for computer use as a risk factor for CTS was insufficient, while the evidence for computer use and UEMSDs was moderate regarding pain complaints and limited for specific musculoskeletal disorders. From the reviews on intervention studies no strong evidence based recommendations could be given. Computer use is associated with pain complaints, but it is still not very clear if this association is causal. The evidence for specific disorders or diseases is limited. No effective interventions have yet been documented.

  8. Risk factors for neck and upper extremity disorders among computers users and the effect of interventions: an overview of systematic reviews.

    Directory of Open Access Journals (Sweden)

    Johan H Andersen

    Full Text Available BACKGROUND: To summarize systematic reviews that 1) assessed the evidence for causal relationships between computer work and the occurrence of carpal tunnel syndrome (CTS) or upper extremity musculoskeletal disorders (UEMSDs), or 2) reported on intervention studies among computer users/or office workers. METHODOLOGY/PRINCIPAL FINDINGS: PubMed, Embase, CINAHL and Web of Science were searched for reviews published between 1999 and 2010. Additional publications were provided by content area experts. The primary author extracted all data using a purpose-built form, while two of the authors evaluated the quality of the reviews using recommended standard criteria from AMSTAR; disagreements were resolved by discussion. The quality of evidence syntheses in the included reviews was assessed qualitatively for each outcome and for the interventions. Altogether, 1,349 review titles were identified, 47 reviews were retrieved for full text relevance assessment, and 17 reviews were finally included as being relevant and of sufficient quality. The degrees of focus and rigorousness of these 17 reviews were highly variable. Three reviews on risk factors for carpal tunnel syndrome were rated moderate to high quality, 8 reviews on risk factors for UEMSDs ranged from low to moderate/high quality, and 6 reviews on intervention studies were of moderate to high quality. The quality of the evidence for computer use as a risk factor for CTS was insufficient, while the evidence for computer use and UEMSDs was moderate regarding pain complaints and limited for specific musculoskeletal disorders. From the reviews on intervention studies no strong evidence based recommendations could be given. CONCLUSIONS/SIGNIFICANCE: Computer use is associated with pain complaints, but it is still not very clear if this association is causal. The evidence for specific disorders or diseases is limited. No effective interventions have yet been documented.

  9. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    Science.gov (United States)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonic studies in China. Aiming at large scale data and the requirement of rapid interpretation, previous authors have carried out a great deal of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with nonuniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation times, instability, and other shortcomings, 3-D physical property inversion has not yet been widely applied to large-scale data. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need for a fast and efficient inversion method for large scale gravity data. As an entirely new geophysical inversion method, 3-D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3-D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with the real data, and
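
    A minimal sketch of the adaptive loop described above, assuming a precomputed linear sensitivity matrix G for the rectangular cells; the correlation step is a standard normalized cross-correlation and the scaling rule is illustrative rather than the paper's exact scheme.

```python
# Adaptive correlation-imaging loop for gravity data (illustrative only).
# G has shape (n_stations, n_cells); g_obs has shape (n_stations,).
import numpy as np

def correlation_image(g_obs, G):
    """Normalized cross-correlation of the observed anomaly with each cell response."""
    num = G.T @ g_obs
    den = np.linalg.norm(g_obs) * np.linalg.norm(G, axis=0)
    return num / den

def adaptive_correlation_inversion(g_obs, G, n_iter=10):
    rho = np.zeros(G.shape[1])                  # density contrasts per cell
    residual = g_obs.copy()
    for _ in range(n_iter):
        c = correlation_image(residual, G)      # equivalent-mass image
        # Linear map from correlation image to a density-contrast update,
        # scaled so its forward response best fits the current residual.
        pred = G @ c
        alpha = (residual @ pred) / (pred @ pred)
        rho += alpha * c
        residual = g_obs - G @ rho              # compare forward data with real data
    return rho
```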

  10. SWAP OBSERVATIONS OF THE LONG-TERM, LARGE-SCALE EVOLUTION OF THE EXTREME-ULTRAVIOLET SOLAR CORONA

    Energy Technology Data Exchange (ETDEWEB)

    Seaton, Daniel B.; De Groof, Anik; Berghmans, David; Nicula, Bogdan [Royal Observatory of Belgium-SIDC, Avenue Circulaire 3, B-1180 Brussels (Belgium); Shearer, Paul [Department of Mathematics, 2074 East Hall, University of Michigan, 530 Church Street, Ann Arbor, MI 48109-1043 (United States)

    2013-11-01

    The Sun Watcher with Active Pixels and Image Processing (SWAP) EUV solar telescope on board the Project for On-Board Autonomy 2 spacecraft has been regularly observing the solar corona in a bandpass near 17.4 nm since 2010 February. With a field of view of 54 × 54 arcmin, SWAP provides the widest-field images of the EUV corona available from the perspective of the Earth. By carefully processing and combining multiple SWAP images, it is possible to produce low-noise composites that reveal the structure of the EUV corona to relatively large heights. A particularly important step in this processing was to remove instrumental stray light from the images by determining and deconvolving SWAP's point-spread function from the observations. In this paper, we use the resulting images to conduct the first-ever study of the evolution of the large-scale structure of the corona observed in the EUV over a three year period that includes the complete rise phase of solar cycle 24. Of particular note is the persistence over many solar rotations of bright, diffuse features composed of open magnetic fields that overlie polar crown filaments and extend to large heights above the solar surface. These features appear to be related to coronal fans, which have previously been observed in white-light coronagraph images and, at low heights, in the EUV. We also discuss the evolution of the corona at different heights above the solar surface and the evolution of the corona over the course of the solar cycle by hemisphere.

  11. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods described are applicable to low-level radioactive waste disposal system performance assessment
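
    The idea behind derivative-based propagation can be illustrated with a first-order sketch; the toy model and finite-difference derivatives below stand in for the analytic derivatives that GRESS/ADGEN generate automatically, and are not the DUA implementation itself.

```python
# First-order, derivative-based uncertainty propagation (illustrative sketch).
import numpy as np

def model(x):
    """Hypothetical response y = f(x1, x2, x3)."""
    return x[0] * np.exp(-x[1]) + x[2] ** 2

def finite_diff_gradient(f, x, h=1e-6):
    """Central finite differences stand in for automated derivatives."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2 * h)
    return g

x_mean = np.array([1.0, 0.5, 2.0])
x_std = np.array([0.1, 0.05, 0.2])           # independent parameter uncertainties

grad = finite_diff_gradient(model, x_mean)
y_var = np.sum((grad * x_std) ** 2)          # first-order variance propagation
print("y =", model(x_mean), "+/-", np.sqrt(y_var))
```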

  12. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
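
    The core of the computation is the repeated evolution and QR re-orthonormalization of tangent vectors; the sketch below shows that step on a toy two-dimensional linear map rather than an N²-sized molecular tangent flow.

```python
# Gram-Schmidt (QR) re-orthonormalization step for Lyapunov exponents/vectors.
import numpy as np

A = np.array([[1.2, 0.4],
              [0.3, 0.8]])                 # hypothetical tangent-space propagator
n_steps, dim = 2000, 2

Q = np.eye(dim)                            # orthonormal Gram-Schmidt vectors
log_growth = np.zeros(dim)

for _ in range(n_steps):
    Z = A @ Q                              # evolve the tangent vectors
    Q, R = np.linalg.qr(Z)                 # re-orthonormalize (Gram-Schmidt)
    log_growth += np.log(np.abs(np.diag(R)))

lyapunov_exponents = log_growth / n_steps
print(lyapunov_exponents)                  # approaches log|eigenvalues| of A
```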

  13. Cross-scale Efficient Tensor Contractions for Coupled Cluster Computations Through Multiple Programming Model Backends

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Epifanovsky, Evgeny [Q-Chem, Inc., Pleasanton, CA (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Krylov, Anna I. [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Chemistry

    2016-07-26

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
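
    The dominant kernels are contractions of the kind sketched below (a ladder-like doubles contraction written with einsum); the dimensions are toy-sized and no point-group or permutational symmetry is exploited, unlike in Libtensor.

```python
# Illustrative coupled-cluster-style tensor contraction, reducible to DGEMM.
import numpy as np

nocc, nvirt = 4, 8                                   # hypothetical orbital counts
V = np.random.rand(nvirt, nvirt, nvirt, nvirt)       # two-electron integrals <ab|cd>
T2 = np.random.rand(nvirt, nvirt, nocc, nocc)        # doubles amplitudes t_{ij}^{cd}

# W_{ij}^{ab} = sum_{cd} <ab|cd> t_{ij}^{cd}
W = np.einsum("abcd,cdij->abij", V, T2, optimize=True)
print(W.shape)
```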

  14. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N 2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nahalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra

  15. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
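
    A minimal map/reduce over time-sharded arrays, in the spirit of operating on bundles of named numeric arrays; the shard layout and variable name below are hypothetical, not SciReduce's own API.

```python
# Toy map/reduce over yearly array shards (illustrative sketch).
import glob
from multiprocessing import Pool

import numpy as np

def map_shard(path):
    """Map step: reduce one yearly shard to (sum, count) for one variable."""
    data = np.load(path)["water_vapor"]              # hypothetical variable name
    return data.sum(), data.size

def reduce_partials(partials):
    """Reduce step: combine per-shard partial sums into a global mean."""
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

if __name__ == "__main__":
    shards = sorted(glob.glob("shards/year_*.npz"))  # one file per year of data
    with Pool() as pool:
        partials = pool.map(map_shard, shards)
    print("climatological mean:", reduce_partials(partials))
```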

  16. Scaling strength distributions in quasi-brittle materials from micro-to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large size samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic sized samples. In terms of scaling, we find that the weakest link scaling applies only to microscopic, and not macroscopic scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
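
    For reference, the Weibull/weakest-link form against which the computed distributions are compared can be written as follows (standard notation, not the paper's own symbols):

```latex
% Standard Weibull/weakest-link scaling; the paper finds deviations from this
% form at intermediate sample sizes.
\[
  P_f(\sigma, V) \;=\; 1 - \exp\!\left[-\frac{V}{V_0}
      \left(\frac{\sigma}{\sigma_0}\right)^{m}\right],
\]
% so that under weakest-link scaling the characteristic strength of a sample
% of volume V obeys
\[
  \sigma_c(V) \;=\; \sigma_0 \left(\frac{V_0}{V}\right)^{1/m}.
\]
```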

  17. Computational psychotherapy research: scaling up the evaluation of patient-provider interactions.

    Science.gov (United States)

    Imel, Zac E; Steyvers, Mark; Atkins, David C

    2015-03-01

    In psychotherapy, the patient-provider interaction contains the treatment's active ingredients. However, the technology for analyzing the content of this interaction has not fundamentally changed in decades, limiting both the scale and specificity of psychotherapy research. New methods are required to "scale up" to larger evaluation tasks and "drill down" into the raw linguistic data of patient-therapist interactions. In the current article, we demonstrate the utility of statistical text analysis models called topic models for discovering the underlying linguistic structure in psychotherapy. Topic models identify semantic themes (or topics) in a collection of documents (here, transcripts). We used topic models to summarize and visualize 1,553 psychotherapy and drug therapy (i.e., medication management) transcripts. Results showed that topic models identified clinically relevant content, including affective, relational, and intervention related topics. In addition, topic models learned to identify specific types of therapist statements associated with treatment-related codes (e.g., different treatment approaches, patient-therapist discussions about the therapeutic relationship). Visualizations of semantic similarity across sessions indicate that topic models identify content that discriminates between broad classes of therapy (e.g., cognitive-behavioral therapy vs. psychodynamic therapy). Finally, predictive modeling demonstrated that topic model-derived features can classify therapy type with a high degree of accuracy. Computational psychotherapy research has the potential to scale up the study of psychotherapy to thousands of sessions at a time. We conclude by discussing the implications of computational methods such as topic models for the future of psychotherapy research and practice. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
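
    A compact sketch of fitting a topic model to a corpus of transcripts with off-the-shelf tools; the corpus path and number of topics are placeholders, and the study used its own purpose-built models on 1,553 transcripts.

```python
# Topic modeling over session transcripts with scikit-learn (illustrative).
import glob

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

transcripts = [open(p, encoding="utf-8").read() for p in glob.glob("transcripts/*.txt")]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
counts = vectorizer.fit_transform(transcripts)

lda = LatentDirichletAllocation(n_components=25, random_state=0)
doc_topics = lda.fit_transform(counts)               # per-session topic weights

# Show the top words of each learned topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```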

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann and P. McBride. Edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini, and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  20. Effects of participatory ergonomic intervention on the development of upper extremity musculoskeletal disorders and disability in office employees using a computer

    Science.gov (United States)

    Baydur, Hakan; Ergör, Alp; Demiral, Yücel; Akalın, Elif

    2016-01-01

    Objective: To evaluate the effect of the participatory ergonomic method on the development of upper extremity musculoskeletal disorders and disability in office employees. Methods: This study is a randomized controlled intervention study. It comprised 116 office workers using computers. Those in the intervention group were taught office ergonomics and the risk assessment method. Cox proportional hazards model and generalized estimating equations (GEEs) were used. Results: In the 10-month postintervention follow-up, the possibility of developing symptoms was 50.9%. According to multivariate analysis results, the possibility of developing symptoms on the right side of the neck and in the right wrist and hand was significantly less in the intervention group than in the control group. Conclusions: Participatory ergonomic intervention decreases the possibility of musculoskeletal complaints and disability/symptom level in office workers. PMID:27108647

  1. Effects of participatory ergonomic intervention on the development of upper extremity musculoskeletal disorders and disability in office employees using a computer.

    Science.gov (United States)

    Baydur, Hakan; Ergör, Alp; Demiral, Yücel; Akalın, Elif

    2016-06-16

    To evaluate the effect of the participatory ergonomic method on the development of upper extremity musculoskeletal disorders and disability in office employees. This study is a randomized controlled intervention study. It comprised 116 office workers using computers. Those in the intervention group were taught office ergonomics and the risk assessment method. Cox proportional hazards model and generalized estimating equations (GEEs) were used. In the 10-month postintervention follow-up, the possibility of developing symptoms was 50.9%. According to multivariate analysis results, the possibility of developing symptoms on the right side of the neck and in the right wrist and hand was significantly less in the intervention group than in the control group. Participatory ergonomic intervention decreases the possibility of musculoskeletal complaints and disability/symptom level in office workers.

  2. Mandelbrot's Extremism

    NARCIS (Netherlands)

    Beirlant, J.; Schoutens, W.; Segers, J.J.J.

    2004-01-01

    In the sixties Mandelbrot already showed that extreme price swings are more likely than some of us think or incorporate in our models. A modern toolbox for analyzing such rare events can be found in the field of extreme value theory. At the core of extreme value theory lies the modelling of maxima

  3. Big Data solutions on a small scale: Evaluating accessible high-performance computing for social research

    Directory of Open Access Journals (Sweden)

    Dhiraj Murthy

    2014-11-01

    Full Text Available Though full of promise, Big Data research success is often contingent on access to the newest, most advanced, and often expensive hardware systems and the expertise needed to build and implement such systems. As a result, the accessibility of the growing number of Big Data-capable technology solutions has often been the preserve of business analytics. Pay-as-you-store/process services like Amazon Web Services have opened up possibilities for smaller scale Big Data projects. There is high demand for this type of research in the digital humanities and digital sociology, for example. However, scholars are increasingly finding themselves at a disadvantage as available data sets of interest continue to grow in size and complexity. Without a large amount of funding or the ability to form interdisciplinary partnerships, only a select few find themselves in the position to successfully engage Big Data. This article identifies several notable and popular Big Data technologies typically implemented using large and extremely powerful cloud-based systems and investigates the feasibility and utility of development of Big Data analytics systems implemented using low-cost commodity hardware in basic and easily maintainable configurations for use within academic social research. Through our investigation and experimental case study (in the growing field of social Twitter analytics), we found that not only are solutions like Cloudera's Hadoop feasible, but that they can also enable robust, deep, and fruitful research outcomes in a variety of use-case scenarios across the disciplines.
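
    The kind of job such a commodity cluster would run can be sketched as a Hadoop-Streaming-style mapper and reducer, here counting hashtags; the input format and script layout are hypothetical.

```python
# Hadoop-Streaming-style mapper/reducer for hashtag counts (illustrative).
import sys
from itertools import groupby

def mapper(lines):
    """Map: emit (hashtag, 1) pairs, one per line, tab-separated."""
    for line in lines:
        for token in line.split():
            if token.startswith("#"):
                print(f"{token.lower()}\t1")

def reducer(lines):
    """Reduce: sum counts per hashtag (input is sorted by key, as in Hadoop)."""
    keyed = (line.rstrip("\n").split("\t") for line in lines)
    for tag, group in groupby(keyed, key=lambda kv: kv[0]):
        print(f"{tag}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    # Select the phase with an argument, e.g. `python job.py map < tweets.txt`.
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```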

  4. Large scale statistics for computational verification of grain growth simulations with experiments

    International Nuclear Information System (INIS)

    Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.

    2002-01-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface dominated material properties, and finally, to verify experimental results with computer simulations. We have previously shown a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations obtained from Electron Backscattered Diffraction (EBSD) measurements. Using the same technique, we obtained 5170-grain data from an Aluminum film (120 µm thick) with a columnar grain structure. The experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Characterization of the structures and properties of grain boundary networks (GBN) to produce desirable microstructures is one of the fundamental problems in interface science. There is ongoing research on the development of new experimental and analytical techniques in order to obtain and synthesize information related to GBN. The grain boundary energy and mobility data were characterized by the Electron Backscattered Diffraction (EBSD) technique and Atomic Force Microscopy (AFM) observations (i.e., for ceramic MgO and for the metal Al). Grain boundary energies are extracted from triple junction (TJ) geometry considering the local equilibrium condition at TJs. Relative boundary mobilities were also extracted from TJs through a statistical/multiscale analysis. Additionally, there have been recent theoretical developments on grain boundary evolution in microstructures. In this paper, a new technique for three-dimensional grain growth simulations was used to simulate interface migration

  5. Translation and cross-cultural adaptation of the lower extremity functional scale into a Brazilian Portuguese version and validation on patients with knee injuries.

    Science.gov (United States)

    Metsavaht, Leonardo; Leporace, Gustavo; Riberto, Marcelo; Sposito, Maria Matilde M; Del Castillo, Letícia N C; Oliveira, Liszt P; Batista, Luiz Alberto

    2012-11-01

    Clinical measurement. To translate and culturally adapt the Lower Extremity Functional Scale (LEFS) into a Brazilian Portuguese version, and to test the construct and content validity and reliability of this version in patients with knee injuries. There is no Brazilian Portuguese version of an instrument to assess the function of the lower extremity after orthopaedic injury. The translation of the original English version of the LEFS into a Brazilian Portuguese version was accomplished using standard guidelines and tested in 31 patients with knee injuries. Subsequently, 87 patients with a variety of knee disorders completed the Brazilian Portuguese LEFS, the Medical Outcomes Study 36-Item Short-Form Health Survey, the Western Ontario and McMaster Universities Osteoarthritis Index, and the International Knee Documentation Committee Subjective Knee Evaluation Form and a visual analog scale for pain. All patients were retested within 2 days to determine reliability of these measures. Validation was assessed by determining the level of association between the Brazilian Portuguese LEFS and the other outcome measures. Reliability was documented by calculating internal consistency, test-retest reliability, and standard error of measurement. The Brazilian Portuguese LEFS had a high level of association with the physical component of the Medical Outcomes Study 36-Item Short-Form Health Survey (r = 0.82), the Western Ontario and McMaster Universities Osteoarthritis Index (r = 0.87), the International Knee Documentation Committee Subjective Knee Evaluation Form (r = 0.82), and the pain visual analog scale (r = -0.60), all statistically significant. The internal consistency and test-retest reliability (intraclass correlation coefficient = 0.957) of the Brazilian Portuguese version of the LEFS were high. The standard error of measurement was low (3.6) and the agreement was considered high, demonstrated by the small differences between test and retest and the narrow limit of agreement, as observed in Bland-Altman and survival-agreement plots. The translation of the LEFS into a

  6. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available

  7. Medium/small-scale computers HITACHI M-620, M-630, and M-640 systems: the aim of development and characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Oshima, N; Saiki, Y; Sunaga, K [Hitachi, Ltd., Tokyo (Japan)

    1990-10-01

    The medium/small-scale HITACHI M-620, M-630, and M-640 computer systems are outlined. Each system is characterized by a configuration usable as a medium- or small-scale host computer in offices, functions for connecting to large-scale host computers, performance 5-50 times that of conventional office computers, easy operation, and fast processing. As features of the hardware, the one-board CPU and the small integrated cubicle structure containing the CPU board, a high-speed large-capacity magnetic disk storage device, various kinds of controllers, and other components are illustrated. As features of the software, the OS (VOS K), featured by the virtual data space control (VDSA) and relational database (RDB) functions, EAGLE/4GL (effective approach to achieving high level software productivity/4th generation language), STEP (self training environmental support program), and the simple end-user language ACE3/E2 are outlined. 7 figs.

  8. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    Science.gov (United States)

    Uesawa, Yoshihiro

    2018-01-01

    Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, and it is illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, will be introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
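
    A volcano plot over drug/adverse-event pairs can be sketched as below, using a reporting odds ratio from each pair's 2x2 contingency table as the effect size; the input table layout is hypothetical and not tied to any particular database.

```python
# Volcano plot over drug/adverse-event pairs (illustrative sketch).
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy.stats import fisher_exact

# One row per drug-event pair with the four cells of its 2x2 contingency table:
# a = reports with drug & event, b = drug only, c = event only, d = neither.
pairs = pd.read_csv("drug_event_counts.csv")   # columns: drug, event, a, b, c, d

effect, signif = [], []
for row in pairs.itertuples():
    odds, p = fisher_exact([[row.a, row.b], [row.c, row.d]])
    effect.append(np.log2(odds))
    signif.append(-np.log10(p))

plt.scatter(effect, signif, s=8, alpha=0.5)
plt.xlabel("log2 reporting odds ratio")
plt.ylabel("-log10 p-value")
plt.title("Drug / adverse-event volcano plot")
plt.show()
```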

  9. An advanced course in computational nuclear physics bridging the scales from quarks to neutron stars

    CERN Document Server

    Lombardo, Maria; Kolck, Ubirajara

    2017-01-01

    This graduate-level text collects and synthesizes a series of ten lectures on the nuclear quantum many-body problem. Starting from our current understanding of the underlying forces, it presents recent advances within the field of lattice quantum chromodynamics before going on to discuss effective field theories, central many-body methods like Monte Carlo methods, coupled cluster theories, the similarity renormalization group approach, Green’s function methods and large-scale diagonalization approaches. Algorithmic and computational advances show particular promise for breakthroughs in predictive power, including proper error estimates, a better understanding of the underlying effective degrees of freedom and of the respective forces at play. Enabled by recent improvements in theoretical, experimental and numerical techniques, the state-of-the art applications considered in this volume span the entire range, from our smallest components – quarks and gluons as the mediators of the strong force – to the c...

  10. Progresses in application of computational fluid dynamic methods to large scale wind turbine aerodynamics

    Institute of Scientific and Technical Information of China (English)

    Zhenyu ZHANG; Ning ZHAO; Wei ZHONG; Long WANG; Bofeng XU

    2016-01-01

    The computational fluid dynamics (CFD) methods are applied to aerodynamic problems for large scale wind turbines. The progress, including the aerodynamic analysis of wind turbine profiles, numerical flow simulation of wind turbine blades, evaluation of aerodynamic performance, and multi-objective blade optimization, is discussed. Based on the CFD methods, significant improvements are obtained in predicting the two/three-dimensional aerodynamic characteristics of wind turbine airfoils and blades, and the vortical structure in their wake flows is accurately captured. Combined with a multi-objective genetic algorithm, a 1.5 MW NH-1500 optimized blade is designed with high efficiency in wind energy conversion.

  11. Computational Cosmology: from the Early Universe to the Large Scale Structure

    Directory of Open Access Journals (Sweden)

    Peter Anninos

    1998-09-01

    Full Text Available In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations addressing specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  12. Computational Cosmology: from the Early Universe to the Large Scale Structure

    Directory of Open Access Journals (Sweden)

    Anninos Peter

    2001-01-01

    Full Text Available In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  13. Computational Cosmology: From the Early Universe to the Large Scale Structure.

    Science.gov (United States)

    Anninos, Peter

    2001-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  14. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
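
    The per-time-step linearization in point (b) can be illustrated with a tiny allocation problem solved as a linear program; the network, coefficients, and units below are hypothetical.

```python
# One-time-step linearized water/energy allocation as a linear program.
from scipy.optimize import linprog

inflow = 120.0          # hm^3 available this step
demand = 60.0           # downstream water demand, hm^3
energy_coeff = 0.35     # GWh produced per hm^3 through the turbine (linearized head)

# Decision variables: x = [release_to_turbine, release_to_demand, spill]
c = [-energy_coeff, 0.0, 0.0]             # maximize energy -> minimize its negative
A_eq = [[1.0, 1.0, 1.0]]                  # mass balance: all water is allocated
b_eq = [inflow]
bounds = [(0, None), (demand, demand), (0, None)]   # demand must be met exactly

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
release_turbine, release_demand, spill = res.x
print(f"energy = {energy_coeff * release_turbine:.1f} GWh, spill = {spill:.1f} hm^3")
```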

  15. Auto-Scaling of Geo-Based Image Processing in an OpenStack Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Sanggoo Kang

    2016-08-01

    Full Text Available Cloud computing is a base platform for the distribution of large volumes of data and high-performance image processing on the Web. Despite wide applications in Web-based services and their many benefits, geo-spatial applications based on cloud computing technology are still developing. Auto-scaling realizes automatic scalability, i.e., the scale-out and scale-in processing of virtual servers in a cloud computing environment. This study investigates the applicability of auto-scaling to geo-based image processing algorithms by comparing the performance of a single virtual server and multiple auto-scaled virtual servers under identical experimental conditions. In this study, the cloud computing environment is built with OpenStack, and four algorithms from the Orfeo toolbox are used for practical geo-based image processing experiments. The auto-scaling results from all experimental performance tests demonstrate applicable significance with respect to cloud utilization concerning response time. Auto-scaling contributes to the development of web-based satellite image application services using cloud-based technologies.
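
    The scale-out/scale-in logic being exercised can be sketched as a simple threshold controller; the monitoring and provisioning calls are injected placeholders, not actual OpenStack APIs.

```python
# Generic threshold-based auto-scaling loop (illustrative; no real cloud API calls).
import time

SCALE_OUT_CPU = 80.0     # % average CPU above which a worker is added
SCALE_IN_CPU = 20.0      # % average CPU below which a worker is removed
MIN_WORKERS, MAX_WORKERS = 1, 8

def control_loop(get_average_cpu, add_worker, remove_worker, n_workers, period=60):
    """Poll utilisation and scale the worker pool out/in between the limits."""
    while True:
        cpu = get_average_cpu()
        if cpu > SCALE_OUT_CPU and n_workers < MAX_WORKERS:
            add_worker()
            n_workers += 1
        elif cpu < SCALE_IN_CPU and n_workers > MIN_WORKERS:
            remove_worker()
            n_workers -= 1
        time.sleep(period)
```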

  16. Computer-aided classification of forest cover types from small scale aerial photography

    Science.gov (United States)

    Bliss, John C.; Bonnicksen, Thomas M.; Mace, Thomas H.

    1980-11-01

    The US National Park Service must map forest cover types over extensive areas in order to fulfill its goal of maintaining or reconstructing presettlement vegetation within national parks and monuments. Furthermore, such cover type maps must be updated on a regular basis to document vegetation changes. Computer-aided classification of small scale aerial photography is a promising technique for generating forest cover type maps efficiently and inexpensively. In this study, seven cover types were classified with an overall accuracy of 62 percent from a reproduction of a 1∶120,000 color infrared transparency of a conifer-hardwood forest. The results were encouraging, given the degraded quality of the photograph and the fact that features were not centered, as well as the lack of information on lens vignetting characteristics to make corrections. Suggestions are made for resolving these problems in future research and applications. In addition, it is hypothesized that the overall accuracy is artificially low because the computer-aided classification more accurately portrayed the intermixing of cover types than the hand-drawn maps to which it was compared.

  17. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case

  18. Engineering integrated digital circuits with allosteric ribozymes for scaling up molecular computation and diagnostics.

    Science.gov (United States)

    Penchovsky, Robert

    2012-10-19

    Here we describe molecular implementations of integrated digital circuits, including a three-input AND logic gate, a two-input multiplexer, and 1-to-2 decoder using allosteric ribozymes. Furthermore, we demonstrate a multiplexer-decoder circuit. The ribozymes are designed to seek-and-destroy specific RNAs with a certain length by a fully computerized procedure. The algorithm can accurately predict one base substitution that alters the ribozyme's logic function. The ability to sense the length of RNA molecules enables single ribozymes to be used as platforms for multiple interactions. These ribozymes can work as integrated circuits with the functionality of up to five logic gates. The ribozyme design is universal since the allosteric and substrate domains can be altered to sense different RNAs. In addition, the ribozymes can specifically cleave RNA molecules with triplet-repeat expansions observed in genetic disorders such as oculopharyngeal muscular dystrophy. Therefore, the designer ribozymes can be employed for scaling up computing and diagnostic networks in the fields of molecular computing and diagnostics and RNA synthetic biology.
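
    The digital abstraction implemented by these ribozymes, with each boolean input standing for the presence of a specific effector RNA, can be written down directly; the sketch below simply tabulates the three-input AND gate and the 2-to-1 multiplexer.

```python
# Truth tables for the logic functions realized by the designer ribozymes.
from itertools import product

def and3(a, b, c):
    """Three-input AND: output only when all effector RNAs are present."""
    return a and b and c

def mux2to1(d0, d1, select):
    """2-to-1 multiplexer: output follows d1 when select is 1, otherwise d0."""
    return d1 if select else d0

for a, b, c in product([0, 1], repeat=3):
    print(f"AND({a},{b},{c}) = {int(and3(a, b, c))}")
for d0, d1, s in product([0, 1], repeat=3):
    print(f"MUX(d0={d0}, d1={d1}, sel={s}) = {mux2to1(d0, d1, s)}")
```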

  19. Really Large Scale Computer Graphic Projection Using Lasers and Laser Substitutes

    Science.gov (United States)

    Rother, Paul

    1989-07-01

    This paper reflects on past laser projects to display vector scanned computer graphic images onto very large and irregular surfaces. Since the availability of microprocessors and high powered visible lasers, very large scale computer graphics projection has become a reality. Due to the independence from a focusing lens, lasers easily project onto distant and irregular surfaces and have been used for amusement parks, theatrical performances, concert performances, industrial trade shows and dance clubs. Lasers have been used to project onto mountains, buildings, 360° globes, clouds of smoke and water. These methods have proven successful in installations at: Epcot Theme Park in Florida; Stone Mountain Park in Georgia; 1984 Olympics in Los Angeles; hundreds of corporate trade shows and thousands of musical performances. Using new ColorRay™ technology, the use of costly and fragile lasers is no longer necessary. Utilizing fiber optic technology, the functionality of lasers can be duplicated for new and exciting projection possibilities. The use of ColorRay™ technology has enjoyed worldwide recognition in conjunction with Pink Floyd and George Michaels' world wide tours.

  20. An eye model for computational dosimetry using a multi-scale voxel phantom

    International Nuclear Information System (INIS)

    Caracappa, P.F.; Rhodes, A.; Fiedler, D.

    2013-01-01

    The lens of the eye is a radiosensitive tissue with cataract formation being the major concern. Recently reduced recommended dose limits to the lens of the eye have made understanding the dose to this tissue of increased importance. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too large to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole body model are developed. When the Lattice Overlay method, the simpler of the two to define, is utilized, the computational penalty in terms of speed is noticeable and the figure of merit for the eye dose tally decreases by as much as a factor of two. When the Voxel Substitution method is applied, the penalty in speed is nearly trivial and the impact on the tally figure of merit is comparatively smaller. The origin of this difference in the code behavior may warrant further investigation

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  4. Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores

    Science.gov (United States)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25 000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.

  5. Spatial and temporal patterns of bank failure during extreme flood events: Evidence of nonlinearity and self-organised criticality at the basin scale?

    Science.gov (United States)

    Thompson, C. J.; Croke, J. C.; Grove, J. R.

    2012-04-01

    Non-linearity in physical systems provides a conceptual framework to explain complex patterns and form that are derived from complex internal dynamics rather than external forcings, and can be used to inform modeling and improve landscape management. One process that has been investigated previously to explore the existence of a self-organised critical system (SOC) in river systems at the basin scale is bank failure. Spatial trends in bank failure have previously been quantified to determine if the distribution of bank failures at the basin scale exhibits the necessary power-law magnitude/frequency distributions. More commonly, bank failures are investigated at a small scale using several cross-sections with strong emphasis on local-scale factors such as bank height, cohesion and hydraulic properties. Advancing our understanding of non-linearity in such processes, however, requires many more studies where both the spatial and temporal measurements of the process can be used to investigate the existence or otherwise of non-linearity and self-organised criticality. This study presents measurements of bank failure throughout the Lockyer catchment in southeast Queensland, Australia, which experienced an extreme flood event in January 2011 resulting in the loss of human lives and geomorphic channel change. The most dominant form of fluvial adjustment consisted of changes in channel geometry and notably widespread bank failures, which were readily identifiable as 'scalloped' failure scarps. The spatial extents of these were mapped using a high-resolution LiDAR-derived digital elevation model and were verified by field surveys and air photos. Pre-flood LiDAR coverage for the catchment also existed, allowing direct comparison of the magnitude and frequency of bank failures from both pre- and post-flood time periods. Data were collected and analysed within a GIS framework and investigated for power-law relationships. Bank failures appeared random and occurred
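
A minimal sketch of the kind of power-law check referred to above: the continuous maximum-likelihood estimator of the exponent (in the style of Clauset et al.) applied to synthetic magnitudes. The sample and the lower cut-off are invented for illustration, not taken from the Lockyer survey.

```python
import numpy as np

def powerlaw_alpha(x, xmin):
    """Continuous MLE of the exponent alpha for p(x) ~ x^(-alpha), x >= xmin:
    alpha = 1 + n / sum(ln(x_i / xmin))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Synthetic magnitudes drawn from a power law with alpha = 2.5 (inverse-CDF sampling)
rng = np.random.default_rng(0)
sample = 1.0 * (1.0 - rng.random(5000)) ** (-1.0 / 1.5)
print(powerlaw_alpha(sample, xmin=1.0))   # should recover a value near 2.5
```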

  6. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  7. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VM's) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEASURES grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be
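
The queue-and-worker pattern described for HySDS can be sketched, in a much reduced form, with a local process pool: a long time range is partitioned into independent tasks that workers pull and execute in parallel. The `process_granule` body is a hypothetical placeholder; nothing here uses the actual HySDS or SciFlow APIs.

```python
from concurrent.futures import ProcessPoolExecutor

def process_granule(day):
    """Placeholder for the per-task work, e.g. pulling one day of A-Train
    variables via a subsetting URL and comparing retrievals (illustrative only)."""
    return day, day * day   # stand-in result

if __name__ == "__main__":
    days = range(365)       # one year of data partitioned into independent tasks
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = dict(pool.map(process_granule, days))
    print(len(results), "tasks completed")
```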

  8. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

    Energy Technology Data Exchange (ETDEWEB)

    Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Swiss Institute of Bioinformatics (SIB), CH-1015 Lausanne (Switzerland); Stefaniuk, Adam Jan [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2015-07-28

    Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
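
To make the discrete, stochastic framework concrete, here is a heavily simplified Gillespie-type simulation of a single mRNA drawing ribosomes from a finite pool. The rate constants and pool size are arbitrary, and the codon-by-codon elongation and the genome-scale competition between mRNA species treated in the paper are omitted.

```python
import numpy as np

def ssa_translation(k_init=0.5, k_term=0.1, ribosomes=50, t_end=500.0, seed=1):
    """Two-reaction Gillespie simulation: a free ribosome initiates on the mRNA
    at rate k_init * R_free, and a bound ribosome finishes a protein and is
    released at rate k_term * R_bound."""
    rng = np.random.default_rng(seed)
    t, free, bound, protein = 0.0, ribosomes, 0, 0
    while t < t_end:
        a1, a2 = k_init * free, k_term * bound     # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)             # time to next event
        if rng.random() < a1 / a0:                 # initiation event
            free, bound = free - 1, bound + 1
        else:                                      # termination event
            free, bound, protein = free + 1, bound - 1, protein + 1
    return protein

print("proteins produced:", ssa_translation())
```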

  9. Computational Fluid Dynamics (CFD) Simulations of Jet Mixing in Tanks of Different Scales

    Science.gov (United States)

    Breisacher, Kevin; Moder, Jeffrey

    2010-01-01

    For long-duration in-space storage of cryogenic propellants, an axial jet mixer is one concept for controlling tank pressure and reducing thermal stratification. Extensive ground-test data from the 1960s to the present exist for tank diameters of 10 ft or less. The design of axial jet mixers for tanks on the order of 30 ft diameter, such as those planned for the Ares V Earth Departure Stage (EDS) LH2 tank, will require scaling of available experimental data from much smaller tanks, as well as designing for microgravity effects. This study will assess the ability of Computational Fluid Dynamics (CFD) to handle a change of scale of this magnitude by performing simulations of existing ground-based axial jet mixing experiments at two tank sizes differing by a factor of ten. Simulations of several axial jet configurations for an Ares V scale EDS LH2 tank during low Earth orbit (LEO) coast are evaluated and selected results are also presented. Data from jet mixing experiments performed in the 1960s by General Dynamics with water at two tank sizes (1 and 10 ft diameter) are used to evaluate CFD accuracy. Jet nozzle diameters ranged from 0.032 to 0.25 in. for the 1 ft diameter tank experiments and from 0.625 to 0.875 in. for the 10 ft diameter tank experiments. Thermally stratified layers were created in both tanks prior to turning on the jet mixer. Jet mixer efficiency was determined by monitoring the temperatures on thermocouple rakes in the tanks to determine when the stratified layer was mixed out. Dye was frequently injected into the stratified tank and its penetration recorded. There were no velocities or turbulence quantities available in the experimental data. A commercially available, time-accurate, multi-dimensional CFD code with free surface tracking (FLOW-3D from Flow Science, Inc.) is used for the simulations presented. Comparisons are made between computed temperatures at various axial locations in the tank at different times and those observed experimentally. The

  10. Development and validation of the computer technology literacy self-assessment scale for Taiwanese elementary school students.

    Science.gov (United States)

    Chang, Chiung-Sui

    2008-01-01

    The purpose of this study was to describe the development and validation of an instrument to identify various dimensions of the computer technology literacy self-assessment scale (CTLS) for elementary school students. The instrument included five CTLS dimensions (subscales): the technology operation skills, the computer usages concepts, the attitudes toward computer technology, the learning with technology, and the Internet operation skills. Participants were 1,539 elementary school students in Taiwan. Data analysis indicated that the instrument developed in the study had satisfactory validity and reliability. Correlations analysis supported the legitimacy of using multiple dimensions in representing students' computer technology literacy. Significant differences were found between male and female students, and between grades on some CTLS dimensions. Suggestions are made for use of the instrument to examine complicated interplays between students' computer behaviors and their computer technology literacy.

  11. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    National Research Council Canada - National Science Library

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  12. Vortex-Concept for Radioactivity Release Prevention at NPP: Development of Computational Model of Lab-Scale Experimental Setup

    Energy Technology Data Exchange (ETDEWEB)

    Ullah, Sana; Sung, Yim Man; Park, Jin Soo; Sung Hyung Jin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The experimental validation of the vortex-like air curtain concept and use of an appropriate CFD modelling approach for analyzing the problem becomes crucial. A lab-scale experimental setup is designed to validate the proposed concept and CFD modeling approach as a part of validation process. In this study, a computational model of this lab-scale experiment setup is developed using open source CFD code OpenFOAM. The computational results will be compared with experimental data for validation purposes in future, when experimental data is available. 1) A computation model of a lab-scale experimental setup, designed to validate the concept of artificial vortex-like airflow generation for application to radioactivity dispersion prevention in the event of severe accident, was developed. 2) The mesh sensitivity study was performed and a mesh of about 2 million cells was found to be sufficient for this setup.

  13. Enabling systematic, harmonised and large-scale biofilms data computation: the Biofilms Experiment Workbench.

    Science.gov (United States)

    Pérez-Rodríguez, Gael; Glez-Peña, Daniel; Azevedo, Nuno F; Pereira, Maria Olívia; Fdez-Riverola, Florentino; Lourenço, Anália

    2015-03-01

    Biofilms are receiving increasing attention from the biomedical community. Biofilm-like growth within human body is considered one of the key microbial strategies to augment resistance and persistence during infectious processes. The Biofilms Experiment Workbench is a novel software workbench for the operation and analysis of biofilms experimental data. The goal is to promote the interchange and comparison of data among laboratories, providing systematic, harmonised and large-scale data computation. The workbench was developed with AIBench, an open-source Java desktop application framework for scientific software development in the domain of translational biomedicine. Implementation favours free and open-source third-parties, such as the R statistical package, and reaches for the Web services of the BiofOmics database to enable public experiment deposition. First, we summarise the novel, free, open, XML-based interchange format for encoding biofilms experimental data. Then, we describe the execution of common scenarios of operation with the new workbench, such as the creation of new experiments, the importation of data from Excel spreadsheets, the computation of analytical results, the on-demand and highly customised construction of Web publishable reports, and the comparison of results between laboratories. A considerable and varied amount of biofilms data is being generated, and there is a critical need to develop bioinformatics tools that expedite the interchange and comparison of microbiological and clinical results among laboratories. We propose a simple, open-source software infrastructure which is effective, extensible and easy to understand. The workbench is freely available for non-commercial use at http://sing.ei.uvigo.es/bew under LGPL license. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Cloud computing as a new technology trend in education

    OpenAIRE

    Шамина, Ольга Борисовна; Буланова, Татьяна Валентиновна

    2014-01-01

    The construction and operation of extremely large-scale, commodity-computer datacenters was the key enabler of Cloud Computing. Cloud Computing can offer services that are beneficial for use in education. With Cloud Computing it is possible to increase the quality of education, improve communicative culture and give teachers and students new application opportunities.

  15. Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Byna, Surendra

    2011-12-06

    Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.

  16. Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements

    International Nuclear Information System (INIS)

    Kim, Sang-Koog

    2010-01-01

    Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive researches. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation has recently emerged as a powerful tool for the study of a variety of phenomena related to spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation and propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.

  17. Micromagnetic computer simulations of spin waves in nanometre-scale patterned magnetic elements

    Science.gov (United States)

    Kim, Sang-Koog

    2010-07-01

    Current needs for further advances in the nanotechnologies of information-storage and -processing devices have attracted a great deal of interest in spin (magnetization) dynamics in nanometre-scale patterned magnetic elements. For instance, the unique dynamic characteristics of non-uniform magnetic microstructures such as various types of domain walls, magnetic vortices and antivortices, as well as spin wave dynamics in laterally restricted thin-film geometries, have been at the centre of extensive and intensive researches. Understanding the fundamentals of their unique spin structure as well as their robust and novel dynamic properties allows us to implement new functionalities into existing or future devices. Although experimental tools and theoretical approaches are effective means of understanding the fundamentals of spin dynamics and of gaining new insights into them, the limitations of those same tools and approaches have left gaps of unresolved questions in the pertinent physics. As an alternative, however, micromagnetic modelling and numerical simulation has recently emerged as a powerful tool for the study of a variety of phenomena related to spin dynamics of nanometre-scale magnetic elements. In this review paper, I summarize the recent results of simulations of the excitation and propagation and other novel wave characteristics of spin waves, highlighting how the micromagnetic computer simulation approach contributes to an understanding of spin dynamics of nanomagnetism and considering some of the merits of numerical simulation studies. Many examples of micromagnetic modelling for numerical calculations, employing various dimensions and shapes of patterned magnetic elements, are given. The current limitations of continuum micromagnetic modelling and of simulations based on the Landau-Lifshitz-Gilbert equation of motion of magnetization are also discussed, along with further research directions for spin-wave studies.
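
For reference, the Landau-Lifshitz-Gilbert equation cited in both versions of this review has the standard textbook form below, with gyromagnetic ratio $\gamma$, Gilbert damping constant $\alpha$ and saturation magnetization $M_s$ (written here from the general convention, not reproduced from the paper):

```latex
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}
```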

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  19. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  20. Computer simulation of immobilized pH gradients at acidic and alkaline extremes - A quest for extended pH intervals

    Science.gov (United States)

    Mosher, Richard A.; Bier, Milan; Righetti, Pier Giorgio

    1986-01-01

    Computer simulations of the concentration profiles of simple biprotic ampholytes with Delta pKs 1, 2, and 3, on immobilized pH gradients (IPG) at extreme pH values (pH 3-4 and pH 10-11) show markedly skewed steady-state profiles with increasing kurtosis at higher Delta pK values. Across neutrality, all the peaks are symmetric irrespective of their Delta pK values, but they show very high contribution to the conductivity of the background gel and significant alteration of the local buffering capacity. The problems of skewness, due to the exponential conductivity profiles at low and high pHs, and of gel burning due to a strong electroosmotic flow generated by the net charges in the gel matrix, also at low and high pHs, are solved by incorporating in the IPG gel a strong viscosity gradient. This is generated by a gradient of linear polyacrylamide which is trapped in the gel by the polymerization process.

  1. A multi-scale computational study on the mechanism of Streptococcus pneumoniae Nicotinamidase (SpNic).

    Science.gov (United States)

    Ion, Bogdan F; Kazim, Erum; Gauld, James W

    2014-09-29

    Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The overall mechanism occurs in two stages: (i) formation of a thioester enzyme-intermediate (IC2) and (ii) hydrolysis of the thioester bond to give the products. The polar protein environment has a significant effect in stabilizing reaction intermediates and in particular transition states. As a result, both stages effectively occur in one step, with Stage 1, formation of IC2, being the rate-limiting barrier with a cost of 53.5 kJ·mol-1 with respect to the reactant complex, RC. The effects of dispersion interactions on the overall mechanism were also considered but were generally calculated to have less significant effects, with the overall mechanism being unchanged. In addition, the active site lysyl (Lys103) is concluded to likely play a role in stabilizing the thiolate of Cys136 during the reaction.
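
As a rough indication of what a 53.5 kJ·mol-1 barrier implies kinetically, transition-state theory gives the order-of-magnitude turnover rate below. This back-of-the-envelope estimate is an illustration added here, not part of the study.

```python
import math

def eyring_rate(dG_kJ_per_mol, T=298.15):
    """Eyring equation k = (k_B*T/h) * exp(-dG/(R*T)) for a free-energy barrier
    given in kJ/mol at temperature T in kelvin."""
    kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314462618
    return (kB * T / h) * math.exp(-dG_kJ_per_mol * 1.0e3 / (R * T))

print(f"estimated rate constant: {eyring_rate(53.5):.2e} s^-1")
```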

  2. Single-polymer dynamics under constraints: scaling theory and computer experiment

    International Nuclear Information System (INIS)

    Milchev, Andrey

    2011-01-01

    The relaxation, diffusion and translocation dynamics of single linear polymer chains in confinement is briefly reviewed with emphasis on the comparison between theoretical scaling predictions and observations from experiment or, most frequently, from computer simulations. Besides cylindrical, spherical and slit-like constraints, related problems such as the chain dynamics in a random medium and the translocation dynamics through a nanopore are also considered. Another particular kind of confinement is imposed by polymer adsorption on attractive surfaces or selective interfaces-a short overview of single-chain dynamics is also contained in this survey. While both theory and numerical experiments consider predominantly coarse-grained models of self-avoiding linear chain molecules with typically Rouse dynamics, we also note some recent studies which examine the impact of hydrodynamic interactions on polymer dynamics in confinement. In all of the aforementioned cases we focus mainly on the consequences of imposed geometric restrictions on single-chain dynamics and try to check our degree of understanding by assessing the agreement between theoretical predictions and observations. (topical review)

  3. A Multi-Scale Computational Study on the Mechanism of Streptococcus pneumoniae Nicotinamidase (SpNic)

    Directory of Open Access Journals (Sweden)

    Bogdan F. Ion

    2014-09-01

    Full Text Available Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The overall mechanism occurs in two stages: (i) formation of a thioester enzyme-intermediate (IC2) and (ii) hydrolysis of the thioester bond to give the products. The polar protein environment has a significant effect in stabilizing reaction intermediates and in particular transition states. As a result, both stages effectively occur in one step, with Stage 1, formation of IC2, being the rate-limiting barrier with a cost of 53.5 kJ•mol−1 with respect to the reactant complex, RC. The effects of dispersion interactions on the overall mechanism were also considered but were generally calculated to have less significant effects, with the overall mechanism being unchanged. In addition, the active site lysyl (Lys103) is concluded to likely play a role in stabilizing the thiolate of Cys136 during the reaction.

  4. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Full Text Available Cost effective and scalable methods for phage production are required to meet an increasing demand for phage, as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and low level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
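
A minimal sketch of the kind of ODE model described above, assuming generic Monod growth of the host and mass-action infection with a fixed latent-period rate and burst size. All parameter values are invented for illustration, and the two-vessel self-cycling operation and cost model of the paper are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K, Y = 0.7, 0.5, 5e8            # max growth rate (1/h), Monod constant (g/L), yield (cells/g)
k_ads, k_lysis, burst = 1e-9, 0.5, 100  # adsorption rate, 1/latent period (1/h), burst size

def rhs(t, y):
    S, X, I, P = y                   # substrate, uninfected cells, infected cells, phage
    mu = mu_max * S / (K + S)        # Monod growth rate
    infection = k_ads * X * P        # mass-action adsorption
    return [-mu * X / Y,
            mu * X - infection,
            infection - k_lysis * I,
            burst * k_lysis * I - infection]

sol = solve_ivp(rhs, (0.0, 24.0), [5.0, 1e6, 0.0, 1e4], max_step=0.05)
print("final phage titre:", sol.y[3, -1])
```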

  5. Comparison of Computational and Experimental Microphone Array Results for an 18%-Scale Aircraft Model

    Science.gov (United States)

    Lockard, David P.; Humphreys, William M.; Khorrami, Mehdi R.; Fares, Ehab; Casalino, Damiano; Ravetta, Patricio A.

    2015-01-01

    An 18%-scale, semi-span model is used as a platform for examining the efficacy of microphone array processing using synthetic data from numerical simulations. Two hybrid RANS/LES codes coupled with Ffowcs Williams-Hawkings solvers are used to calculate 97 microphone signals at the locations of an array employed in the NASA LaRC 14x22 tunnel. Conventional, DAMAS, and CLEAN-SC array processing is applied in an identical fashion to the experimental and computational results for three different configurations involving deploying and retracting the main landing gear and a part span flap. Despite the short time records of the numerical signals, the beamform maps are able to isolate the noise sources, and the appearance of the DAMAS synthetic array maps is generally better than those from the experimental data. The experimental CLEAN-SC maps are similar in quality to those from the simulations indicating that CLEAN-SC may have less sensitivity to background noise. The spectrum obtained from DAMAS processing of synthetic array data is nearly identical to the spectrum of the center microphone of the array, indicating that for this problem array processing of synthetic data does not improve spectral comparisons with experiment. However, the beamform maps do provide an additional means of comparison that can reveal differences that cannot be ascertained from spectra alone.

  6. Proceedings of joint meeting of the 6th simulation science symposium and the NIFS collaboration research 'large scale computer simulation'

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-03-01

    Joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulations. The present meeting attended by more than 40 people consists of the 11 invited and 22 contributed papers, of which topics were extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, computer science etc. (author)

  7. Extreme cosmos

    CERN Document Server

    Gaensler, Bryan

    2011-01-01

    The universe is all about extremes. Space has a temperature 270°C below freezing. Stars die in catastrophic supernova explosions a billion times brighter than the Sun. A black hole can generate 10 million trillion volts of electricity. And hypergiants are stars 2 billion kilometres across, larger than the orbit of Jupiter. Extreme Cosmos provides a stunning new view of the way the Universe works, seen through the lens of extremes: the fastest, hottest, heaviest, brightest, oldest, densest and even the loudest. This is an astronomy book that not only offers amazing facts and figures but also re

  8. Design, development and integration of a large scale multiple source X-ray computed tomography system

    International Nuclear Information System (INIS)

    Malcolm, Andrew A.; Liu, Tong; Ng, Ivan Kee Beng; Teng, Wei Yuen; Yap, Tsi Tung; Wan, Siew Ping; Kong, Chun Jeng

    2013-01-01

    X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of more dense materials. In response to local industry demand and in support of on-going research activity in the area of 3D X-ray imaging for industrial inspection the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of large scale multiple source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open tube microfocus X-ray source and a 450 kV closed tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel onto which high precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in future

  9. Challenges in computational materials science: Multiple scales, multi-physics and evolving discontinuities

    NARCIS (Netherlands)

    Borst, de R.

    2008-01-01

    Novel experimental possibilities together with improvements in computer hardware as well as new concepts in computational mathematics and mechanics in particular multiscale methods are now, in principle, making it possible to derive and compute phenomena and material parameters at a macroscopic

  10. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    Directory of Open Access Journals (Sweden)

    Graham Cormode

    Full Text Available Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
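
A bare-bones illustration of the underlying LSH idea: random-hyperplane signatures for cosine similarity, bucketed by a signature prefix. This single-table, single-machine sketch with synthetic vectors does not reproduce the multi-probe variants or the Hadoop deployment evaluated in the paper.

```python
import numpy as np

def simhash_signatures(X, n_bits=32, seed=0):
    """Random-hyperplane LSH: each bit records which side of a random hyperplane
    a vector falls on, so similar vectors tend to share signature bits."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

X = np.random.default_rng(1).standard_normal((1000, 64))   # toy item vectors
sig = simhash_signatures(X)

# Bucket items by the first 16 signature bits; items sharing a bucket are
# candidate near-neighbours to be verified with an exact similarity check.
buckets = {}
for i, s in enumerate(sig):
    buckets.setdefault(bytes(s[:16]), []).append(i)
print("non-singleton buckets:", sum(len(v) > 1 for v in buckets.values()))
```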

  11. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. It is often, however, that only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
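
The alternating structure described above can be outlined as in the skeleton below. `solve_ip` and `solve_lp` are hypothetical callbacks standing in for the integer- and linear-programming formulations of the paper, which are not reproduced here; the bookkeeping of excluded deletion sets is likewise an assumption made for the sketch.

```python
def alternate_ip_lp(network, max_iterations, solve_ip, solve_lp):
    """Sketch of the AILP loop: an IP proposes a minimal reaction-deletion set
    that disables every elementary mode (EM) found so far; an LP restricted by
    that deletion set then either yields a new EM or, if infeasible, certifies
    the deletion set as a minimal cut set (MCS)."""
    modes, cut_sets, excluded_deletions = [], [], []
    for _ in range(max_iterations):
        deletions = solve_ip(network, modes, excluded_deletions)
        if deletions is None:                 # no further deletion set exists
            break
        flux = solve_lp(network, deletions)
        if flux is None:
            cut_sets.append(deletions)        # infeasible LP -> MCS
            excluded_deletions.append(deletions)
        else:
            modes.append(flux)                # feasible LP -> distinct EM
    return modes, cut_sets
```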

  12. Commercial applications of large-scale Research and Development computer simulation technologies

    International Nuclear Information System (INIS)

    Kuok Mee Ling; Pascal Chen; Wen Ho Lee

    1998-01-01

    The potential commercial applications of two large-scale R and D computer simulation technologies are presented. One such technology is based on the numerical solution of the hydrodynamics equations, and is embodied in the two-dimensional Eulerian code EULE2D, which solves the hydrodynamic equations with various models for the equation of state (EOS), constitutive relations and fracture mechanics. EULE2D is an R and D code originally developed to design and analyze conventional munitions for anti-armor penetrations such as shaped charges, explosive formed projectiles, and kinetic energy rods. Simulated results agree very well with actual experiments. A commercial application presented here is the design and simulation of shaped charges for oil and gas well bore perforation. The other R and D simulation technology is based on the numerical solution of Maxwell's partial differential equations of electromagnetics in space and time, and is implemented in the three-dimensional code FDTD-SPICE, which solves Maxwell's equations in the time domain with finite-differences in the three spatial dimensions and calls SPICE for information when nonlinear active devices are involved. The FDTD method has been used in the radar cross-section modeling of military aircrafts and many other electromagnetic phenomena. The coupling of FDTD method with SPICE, a popular circuit and device simulation program, provides a powerful tool for the simulation and design of microwave and millimeter-wave circuits containing nonlinear active semiconductor devices. A commercial application of FDTD-SPICE presented here is the simulation of a two-element active antenna system. The simulation results and the experimental measurements are in excellent agreement. (Author)
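
To illustrate the finite-difference time-domain method that FDTD-SPICE is built on, here is a minimal 1-D Yee-style update in normalised units (free space, a soft Gaussian source, no absorbing boundaries and no SPICE coupling). It is an illustration of the scheme only, not the production code described above.

```python
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field samples, staggered between E nodes

for n in range(nt):
    hy += 0.5 * (ez[1:] - ez[:-1])                    # update H from the curl of E
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])              # update E from the curl of H
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)    # soft Gaussian source at the centre

print("peak |Ez| after propagation:", np.abs(ez).max())
```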

  13. Effect of calcified plaques on estimation of arterial stenosis of lower extremity in diabetic foot patients using multislice computed tomography angiography

    International Nuclear Information System (INIS)

    Yu Xiaojing; Jin Yan; Wang Ge; Li Chunzhi; Zhang Yi; Ren Hua

    2013-01-01

    Objective: To investigate the impact of calcified plaques on the estimation of arterial stenosis of the lower extremity in diabetic foot patients using 16-slice computed tomography angiography (MSCTA). Materials and Methods: Thirty-five patients (representing 38 cases) underwent both MSCTA and digital subtraction angiography (DSA) examinations. The arteries of the lower extremity were divided into 15 anatomic segments, and the degree of artery stenosis in each segment was classified as normal, mildly, moderately or severely stenosed, or occluded. The extent of calcification in each segment was also assessed on cross-sectional MSCTA images and was classified as absent, mild, moderate, or severe. Using DSA as the standard reference, the sensitivity, specificity, accuracy, Youden index, positive predictive value and negative predictive value of MSCTA were calculated. Agreement between MSCTA and DSA was assessed by Cohen's kappa statistics. Results: In the noncalcified, mildly and moderately calcified segments of the artery above the knee, for the detection of segments that had more than mild stenosis, the sensitivity, specificity, accuracy, Youden index, positive predictive value and negative predictive value of MSCTA were 97.1%, 98.7%, 98.2%, 95.8%, 97.0% and 98.7%, respectively. In the severely calcified segments of the artery above the knee, for the detection of segments that had more than mild stenosis, the sensitivity, specificity, accuracy, Youden index, positive predictive value and negative predictive value of MSCTA were 96.3%, 93.8%, 94.7%, 90.1%, 89.7% and 97.8%, respectively. In the noncalcified, mildly and moderately calcified segments of the artery below the knee, for the detection of segments that had more than mild stenosis, the sensitivity, specificity, accuracy, Youden index, positive predictive value and negative predictive value of MSCTA were 95.1%, 93.2%, 94.1%, 88.3%, 93.4% and 94.9%, respectively. In the severely calcified segments of the artery below the
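
The agreement statistics quoted in this abstract follow directly from a 2x2 confusion table against the DSA reference. The helper below shows the standard definitions; the counts in the example call are made up to demonstrate the calculation, not taken from the study.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard per-segment diagnostic statistics from true/false positives and negatives."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "youden_index": sens + spec - 1.0,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical segment counts, for illustration only
print(diagnostic_metrics(tp=66, fp=2, tn=150, fn=2))
```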

  14. A leap in scale for computers; Un saut d'echelle pour les calculateurs

    Energy Technology Data Exchange (ETDEWEB)

    Barenco, A; Ekert, A; Macchiavello, Ch; Sanpera, A [Oxford Univ. (United Kingdom)

    1996-11-01

    The peculiar laws of quantum physics may lead to an upheaval in computing and information processing. Digital computers deal with bits that are 0 or 1. Quantum mechanics may in theory provide q-bits, coherent superpositions of the states 0 and 1. A set of N q-bits can represent up to 2^N states concomitantly. This superposition allows massively parallel computing. The "universal quantum computer" of Deutsch was the first report mentioning this possibility. The first quantum algorithm shows how to factorize big numbers with a quantum computer, a task unachievable with digital computers and a major theoretical issue for cryptography. The technical difficulty is to implement a quantum computer. The main barriers are interference, decoherence and information retrieval. However, recent experimental studies give new hints for building quantum logic circuits. (O.M.). 4 refs.

  15. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method.

    Science.gov (United States)

    Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali

    2014-03-01

    The complexity and heterogeneity of bone tissue require a multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including microfibril scale based on hybrid neural network (NN) computation and homogenization equations was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation. Finite element calculation is performed at nanoscopic levels to provide a database to train an in-house NN program; and (iii) in steps 2-10 from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of mineralized collagen fibril. The mechanical and geometrical properties of bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken in consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to mesostructural levels. Our findings of the lowest scale's output were well integrated with the other higher levels and serve as inputs for the next higher scale modeling. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
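
The first step of the pipeline, averaging the upper (Voigt) and lower (Reuss) Hill bounds for a two-phase mixture, reduces to a few lines for a single scalar modulus. The moduli and volume fraction below are hypothetical, and the anisotropic homogenization and neural-network stages of the later steps are not shown.

```python
def hill_average(E1, E2, v1):
    """Hill estimate for a two-phase composite: the mean of the Voigt (iso-strain)
    and Reuss (iso-stress) bounds. E1, E2 are phase moduli, v1 the volume
    fraction of phase 1."""
    v2 = 1.0 - v1
    voigt = v1 * E1 + v2 * E2              # upper bound
    reuss = 1.0 / (v1 / E1 + v2 / E2)      # lower bound
    return 0.5 * (voigt + reuss)

# Hypothetical mineral-water mixture (moduli in GPa, volume fraction made up)
print(hill_average(E1=114.0, E2=2.3, v1=0.6))
```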

  16. Multi-Scale Computational Modeling of Ni-Base Superalloy Brazed Joints for Gas Turbine Applications

    Science.gov (United States)

    Riggs, Bryan

    Brazed joints are commonly used in the manufacture and repair of aerospace components including high temperature gas turbine components made of Ni-base superalloys. For such critical applications, it is becoming increasingly important to account for the mechanical strength and reliability of the brazed joint. However, material properties of brazed joints are not readily available and methods for evaluating joint strength such as those listed in AWS C3.2 have inherent challenges compared with testing bulk materials. In addition, joint strength can be strongly influenced by the degree of interaction between the filler metal (FM) and the base metal (BM), the joint design, and presence of flaws or defects. As a result, there is interest in the development of a multi-scale computational model to predict the overall mechanical behavior and fitness-for-service of brazed joints. Therefore, the aim of this investigation was to generate data and methodology to support such a model for Ni-base superalloy brazed joints with conventional Ni-Cr-B based FMs. Based on a review of the technical literature a multi-scale modeling approach was proposed to predict the overall performance of brazed joints by relating mechanical properties to the brazed joint microstructure. This approach incorporates metallurgical characterization, thermodynamic/kinetic simulations, mechanical testing, fracture mechanics and finite element analysis (FEA) modeling to estimate joint properties based on the initial BM/FM composition and brazing process parameters. Experimental work was carried out in each of these areas to validate the multi-scale approach and develop improved techniques for quantifying brazed joint properties. Two Ni-base superalloys often used in gas turbine applications, Inconel 718 and CMSX-4, were selected for study and vacuum furnace brazed using two common FMs, BNi-2 and BNi-9. Metallurgical characterization of these brazed joints showed two primary microstructural regions; a soft

  17. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.

  18. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  19. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  20. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  1. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes

  2. Further outlooks: extremely uncomfortable; Die weiteren Aussichten: extrem ungemuetlich

    Energy Technology Data Exchange (ETDEWEB)

    Resenhoeft, T.

    2006-07-01

    The climate has been changing toward extremes in recent decades. Scientists dealing with extreme weather should not only stare at computer simulations: they also have to consider the psyche, take personal experiences seriously, know the statistics, put supposedly sensational reports into perspective and, last but not least, collect more data. (GL)

  3. Google Earth Engine: a new cloud-computing platform for global-scale earth observation data and analysis

    Science.gov (United States)

    Moore, R. T.; Hansen, M. C.

    2011-12-01

    Google Earth Engine is a new technology platform that enables monitoring and measurement of changes in the earth's environment, at planetary scale, on a large catalog of earth observation data. The platform offers intrinsically-parallel computational access to thousands of computers in Google's data centers. Initial efforts have focused primarily on global forest monitoring and measurement, in support of REDD+ activities in the developing world. The intent is to put this platform into the hands of scientists and developing world nations, in order to advance the broader operational deployment of existing scientific methods, and strengthen the ability for public institutions and civil society to better understand, manage and report on the state of their natural resources. Earth Engine currently hosts online nearly the complete historical Landsat archive of L5 and L7 data collected over more than twenty-five years. Newly-collected Landsat imagery is downloaded from USGS EROS Center into Earth Engine on a daily basis. Earth Engine also includes a set of historical and current MODIS data products. The platform supports generation, on-demand, of spatial and temporal mosaics, "best-pixel" composites (for example to remove clouds and gaps in satellite imagery), as well as a variety of spectral indices. Supervised learning methods are available over the Landsat data catalog. The platform also includes a new application programming framework, or "API", that allows scientists access to these computational and data resources, to scale their current algorithms or develop new ones. Under the covers of the Google Earth Engine API is an intrinsically-parallel image-processing system. Several forest monitoring applications powered by this API are currently in development and expected to be operational in 2011. Combining science with massive data and technology resources in a cloud-computing framework can offer advantages of computational speed, ease-of-use and collaboration, as
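
    The on-demand compositing and index computation described above can be sketched with the Earth Engine Python API roughly as follows. The snippet assumes the earthengine-api package is installed and authenticated, and the Landsat 7 collection ID and band names are placeholders that depend on the current catalog version.

      import ee

      ee.Initialize()

      point = ee.Geometry.Point(-122.26, 37.87)            # area of interest

      # Build a cloud-reduced "best-pixel"-style composite by taking the per-pixel
      # median of one year of Landsat 7 surface reflectance scenes.
      composite = (
          ee.ImageCollection('LANDSAT/LE07/C02/T1_L2')     # placeholder collection ID
          .filterBounds(point)
          .filterDate('2010-01-01', '2010-12-31')
          .median()
      )

      # Derive a spectral index (NDVI) and reduce it over a small neighbourhood.
      ndvi = composite.normalizedDifference(['SR_B4', 'SR_B3']).rename('ndvi')
      stats = ndvi.reduceRegion(
          reducer=ee.Reducer.mean(),
          geometry=point.buffer(500),
          scale=30,
      )
      print(stats.getInfo())                               # computed server-side, fetched on demand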

  4. Extreme Programming: Maestro Style

    Science.gov (United States)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme

  5. A Developmental Scale of Mental Computation with Part-Whole Numbers

    Science.gov (United States)

    Callingham, Rosemary; Watson, Jane

    2004-01-01

    In this article, data from a study of the mental computation competence of students in grades 3 to 10 are presented. Students responded to mental computation items, presented orally, that included operations applied to fractions, decimals and percents. The data were analysed using Rasch modelling techniques, and a six-level hierarchy of part-whole…

  6. Report of the Working Group on Large-Scale Computing in Aeronautics.

    Science.gov (United States)

    1984-06-01

    function and the use of drawings. In the hardware area, contemporary large computer installations are quite powerful in terms of speed of computation as...critical to the competitive advantage of that member. He might then be willing to make them available to less advanced members under some business

  7. Design of large scale applications of secure multiparty computation : secure linear programming

    NARCIS (Netherlands)

    Hoogh, de S.J.A.

    2012-01-01

    Secure multiparty computation is a basic concept of growing interest in modern cryptography. It allows a set of mutually distrusting parties to perform a computation on their private information in such a way that as little as possible is revealed about each private input. The early results of
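
    The core idea of computing on private inputs can be illustrated with additive secret sharing, as in the toy sketch below; this assumes an honest-but-curious setting and shows only secure addition, whereas the secure linear programming developed in this thesis builds far richer protocols (multiplication, comparison, and so on) on top of such primitives.

      import random

      P = 2_147_483_647          # a public prime modulus

      def share(secret, n_parties=3):
          """Split a secret into n additive shares that individually reveal nothing."""
          shares = [random.randrange(P) for _ in range(n_parties - 1)]
          shares.append((secret - sum(shares)) % P)
          return shares

      def reconstruct(shares):
          return sum(shares) % P

      # Two parties' private inputs; each is shared among three compute servers.
      a_shares, b_shares = share(41), share(1_000)

      # Each server adds its local shares -- no server ever sees a or b -- yet the
      # reconstructed result equals a + b.
      sum_shares = [(sa + sb) % P for sa, sb in zip(a_shares, b_shares)]
      print(reconstruct(sum_shares))   # 1041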

  8. Exascale Co-design for Modeling Materials in Extreme Environments

    Energy Technology Data Exchange (ETDEWEB)

    Germann, Timothy C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-07-08

    Computational materials science has provided great insight into the response of materials under extreme conditions that are difficult to probe experimentally. For example, shock-induced plasticity and phase transformation processes in single-crystal and nanocrystalline metals have been widely studied via large-scale molecular dynamics simulations, and many of these predictions are beginning to be tested at advanced 4th generation light sources such as the Advanced Photon Source (APS) and Linac Coherent Light Source (LCLS). I will describe our simulation predictions and their recent verification at LCLS, outstanding challenges in modeling the response of materials to extreme mechanical and radiation environments, and our efforts to tackle these as part of the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx). ExMatEx has initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. We anticipate that we will be able to exploit hierarchical, heterogeneous architectures to achieve more realistic large-scale simulations with adaptive physics refinement, and are using tractable application scale-bridging proxy application testbeds to assess new approaches and requirements. Such current scale-bridging strategies accumulate (or recompute) a distributed response database from fine-scale calculations, in a top-down rather than bottom-up multiscale approach.

  9. How extreme is extreme hourly precipitation?

    Science.gov (United States)

    Papalexiou, Simon Michael; Dialynas, Yannis G.; Pappas, Christoforos

    2016-04-01

    The importance of accurate representation of precipitation at fine time scales (e.g., hourly), directly associated with flash flood events, is crucial in hydrological design and prediction. The upper part of a probability distribution, known as the distribution tail, determines the behavior of extreme events. In general, and loosely speaking, tails can be categorized in two families: the subexponential and the hyperexponential family, with the first generating more intense and more frequent extremes compared to the latter. In past studies, the focus has been mainly on daily precipitation, with the Gamma distribution being the most popular model. Here, we investigate the behaviour of tails of hourly precipitation by comparing the upper part of empirical distributions of thousands of records with three general types of tails corresponding to the Pareto, Lognormal, and Weibull distributions. Specifically, we use thousands of hourly rainfall records from all over the USA. The analysis indicates that heavier-tailed distributions describe better the observed hourly rainfall extremes in comparison to lighter tails. Traditional representations of the marginal distribution of hourly rainfall may significantly deviate from observed behaviours of extremes, with direct implications on hydroclimatic variables modelling and engineering design.
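
    The tail comparison described above can be sketched as follows, assuming hourly_mm is a station's non-zero hourly depths, the tail is taken as exceedances over the empirical 99th percentile, and maximum likelihood with the location fixed at zero is used for each candidate family; the study's actual fitting and comparison criteria may differ, and the synthetic record below is only a placeholder.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      hourly_mm = rng.gamma(shape=0.6, scale=2.0, size=20_000)   # placeholder station record

      threshold = np.quantile(hourly_mm, 0.99)
      excess = hourly_mm[hourly_mm > threshold] - threshold      # tail sample

      candidates = {
          "Pareto (GPD)": stats.genpareto,
          "Lognormal": stats.lognorm,
          "Weibull": stats.weibull_min,
      }

      for name, dist in candidates.items():
          params = dist.fit(excess, floc=0)                      # keep the tail anchored at 0
          loglik = np.sum(dist.logpdf(excess, *params))
          print(f"{name:14s} log-likelihood = {loglik:8.1f}")
      # The family with the highest likelihood is the preferred tail model; the abstract
      # reports that heavier (subexponential) tails win on observed hourly records.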

  10. Extremal graph theory

    CERN Document Server

    Bollobas, Bela

    2004-01-01

    The ever-expanding field of extremal graph theory encompasses a diverse array of problem-solving methods, including applications to economics, computer science, and optimization theory. This volume, based on a series of lectures delivered to graduate students at the University of Cambridge, presents a concise yet comprehensive treatment of extremal graph theory.Unlike most graph theory treatises, this text features complete proofs for almost all of its results. Further insights into theory are provided by the numerous exercises of varying degrees of difficulty that accompany each chapter. A

  11. Development and testing of a scale to assess physician attitudes about handheld computers with decision support.

    Science.gov (United States)

    Ray, Midge N; Houston, Thomas K; Yu, Feliciano B; Menachemi, Nir; Maisiak, Richard S; Allison, Jeroan J; Berner, Eta S

    2006-01-01

    The authors developed and evaluated a rating scale, the Attitudes toward Handheld Decision Support Software Scale (H-DSS), to assess physician attitudes about handheld decision support systems. The authors conducted a prospective assessment of psychometric characteristics of the H-DSS including reliability, validity, and responsiveness. Participants were 82 Internal Medicine residents. A higher score on each of the 14 five-point Likert scale items reflected a more positive attitude about handheld DSS. The H-DSS score is the mean across the fourteen items. Attitudes toward the use of the handheld DSS were assessed prior to and six months after receiving the handheld device. Cronbach's Alpha was used to assess internal consistency reliability. Pearson correlations were used to estimate and detect significant associations between scale scores and other measures (validity). Paired sample t-tests were used to test for changes in the mean attitude scale score (responsiveness) and for differences between groups. Internal consistency reliability for the scale was alpha = 0.73. In testing validity, moderate correlations were noted between the attitude scale scores and self-reported Personal Digital Assistant (PDA) usage in the hospital (correlation coefficient = 0.55) and clinic (0.48). The H-DSS scale was reliable, valid, and responsive. The scale can be used to guide future handheld DSS development and implementation.
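
    For reference, the internal consistency statistic reported above can be computed as in the sketch below; the respondents-by-items matrix is synthetic and the function is the standard Cronbach's alpha formula, not code from the study.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for a respondents-by-items matrix of Likert ratings."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]                            # number of scale items
          item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
          total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
          return (k / (k - 1)) * (1 - item_vars / total_var)

      # Synthetic 82-respondent, 14-item data: a shared attitude component plus item noise.
      rng = np.random.default_rng(7)
      base = rng.integers(1, 6, size=(82, 1))
      items = np.clip(base + rng.integers(-1, 2, size=(82, 14)), 1, 5)
      print(f"alpha = {cronbach_alpha(items):.2f}")     # high for strongly correlated items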

  12. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    International Nuclear Information System (INIS)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.

    2015-01-01

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server threshold-based infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.

  13. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    Energy Technology Data Exchange (ETDEWEB)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V. [Institute of Informatics Problems, Russian Academy of Sciences (Russian Federation); Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S. [Telecommunication Systems Department, Peoples’ Friendship University of Russia (Russian Federation)

    2015-03-10

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server threshold-based infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
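
    The hysteresis policy motivating the model can be illustrated with the toy discrete-event loop below, which assumes two queue-length thresholds, a fixed activation delay, and a synthetic arrival stream; it demonstrates the control idea only and does not reproduce the paper's analytical steady-state solution.

      import random

      UP, DOWN = 20, 5          # switch a server on above UP, off below DOWN (UP > DOWN)
      SETUP_TICKS = 10          # non-instantaneous activation delay
      MAX_SERVERS, RATE = 8, 3  # server pool size and per-server service rate per tick

      queue, active, warming = 0, 1, []   # warming = remaining setup ticks per pending server
      random.seed(0)

      for tick in range(200):
          queue += random.randint(0, 6)                    # arrivals
          queue = max(0, queue - active * RATE)            # service by active servers

          warming = [w - 1 for w in warming]
          active += sum(1 for w in warming if w == 0)      # servers that finished warming up
          warming = [w for w in warming if w > 0]

          # Hysteresis: the on-threshold and off-threshold differ, so short load spikes
          # or dips do not trigger immediate scaling actions.
          if queue > UP and active + len(warming) < MAX_SERVERS:
              warming.append(SETUP_TICKS)
          elif queue < DOWN and active > 1:
              active -= 1

          if tick % 40 == 0:
              print(f"tick={tick:3d} queue={queue:3d} active={active} warming={len(warming)}")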

  14. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    Energy Technology Data Exchange (ETDEWEB)

    DOE Office of Science, Biological and Environmental Research Program Office (BER),

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  15. Computational and Experimental Investigations of the Molecular Scale Structure and Dynamics of Geologically Important Fluids and Mineral-Fluid Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, Geoffrey [Alfred Univ., NY (United States)

    2017-04-05

    United States Department of Energy grant DE-FG02-10ER16128, “Computational and Spectroscopic Investigations of the Molecular Scale Structure and Dynamics of Geologically Important Fluids and Mineral-Fluid Interfaces” (Geoffrey M. Bowers, P.I.) focused on developing a molecular-scale understanding of processes that occur in fluids and at solid-fluid interfaces using the combination of spectroscopic, microscopic, and diffraction studies with molecular dynamics computer modeling. The work is intimately tied to the twin proposal at Michigan State University (DOE DE-FG02-08ER15929; same title: R. James Kirkpatrick, P.I. and A. Ozgur Yazaydin, co-P.I.).

  16. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    Science.gov (United States)

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
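
    A rough sketch of the load-balancing argument is given below: per-cell load is taken as proportional to compartment count, cells are placed greedily on the least loaded rank, and any cell heavier than the average per-rank load may be split once into two equal subtrees. The cell sizes are invented, and the scheme only mimics the balancing idea, not NEURON's actual splitting or solver.

      import heapq

      def balance(cell_loads, n_ranks, allow_split):
          """Return the load imbalance (max rank load / ideal load) of a greedy placement."""
          target = sum(cell_loads) / n_ranks
          pieces = []
          for load in cell_loads:
              if allow_split and load > target:
                  pieces += [load / 2, load / 2]   # split one tree into two subtrees
              else:
                  pieces.append(load)
          ranks = [(0.0, r) for r in range(n_ranks)]       # (load, rank id) min-heap
          heapq.heapify(ranks)
          for load in sorted(pieces, reverse=True):        # largest-first greedy placement
              total, r = heapq.heappop(ranks)
              heapq.heappush(ranks, (total + load, r))
          return max(total for total, _ in ranks) / target # 1.0 = ideal balance

      cells = [900, 600, 400, 300, 120, 80, 60, 40]        # compartments per cell (made up)
      for n_ranks in (4, 8):
          print(n_ranks, "ranks: imbalance",
                round(balance(cells, n_ranks, False), 2), "->",
                round(balance(cells, n_ranks, True), 2), "with splitting")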

  17. Computational fluid dynamics for dense gas-solid fluidized beds: a multi-scale modeling strategy

    NARCIS (Netherlands)

    Hoef, van der M.A.; Sint Annaland, van M.; Kuipers, J.A.M.

    2005-01-01

    Dense gas-particle flows are encountered in a variety of industrially important processes for large scale production of fuels, fertilizers and base chemicals. The scale-up of these processes is often problematic and is related to the intrinsic complexities of these flows which are unfortunately not

  18. Computational fluid dynamics for dense gas-solid fluidized beds: a multi-scale modeling strategy

    NARCIS (Netherlands)

    van der Hoef, Martin Anton; van Sint Annaland, M.; Kuipers, J.A.M.

    2004-01-01

    Dense gas–particle flows are encountered in a variety of industrially important processes for large scale production of fuels, fertilizers and base chemicals. The scale-up of these processes is often problematic, which can be related to the intrinsic complexities of these flows which are

  19. Observation of the lymph flow in the lower extremities of edematous patients with noninvasive methods. RI-lymphography with a computer onlined gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Arai, Isao; Hirota, Akio; Watanabe, Sumio (Toho Univ., Tokyo (Japan). School of Medicine)

    1983-09-01

    An RI-lymphography with a computer-onlined gamma camera was used for observing the lymph flow of edematous patients without any invasive procedures and for estimating the active movement of lymph vessels. Subjects were composed of 8 normal volunteers (group 1), 41 non-edematous patients (group 2) and 26 edematous patients (group 3). Four mCi of Tc-99m-HSA in a volume of 0.1 ml was injected subcutaneously in the pretibial region of the lower extremity, and immediately after the injection a scintigram was recorded on the thigh every 5 sec. for 30 min. Results: 1) Normal volunteers; Time-activity curves showed a gradual increase in RI activity in relation to time without remarkable spike-like fluctuations. The maximum count attained was less than 200 cps in all experiments. 2) Non-edematous patients; In 46 out of 57 experiments (80.8%), time-activity curves similar to those of the normal volunteers were observed. On the other hand, time-activity curves in 11 out of 57 (19.2%) showed a much steeper stepwise increase simultaneously with remarkable spike-waves. The maximum count was over 200 cps in these cases. 3) Edematous patients; In 12 out of 35 experiments (34.3%), the maximum count was over 200 cps. In these edematous diseases other than lymphedema and hyperthyroidism, time-activity curves showed a rapid stepwise increase with a lot of spikes, and the maximum count was over 500 cps in 6 experiments. In 23 out of 35 (65.7%), the maximum count was less than 200 cps. In these cases, edema was attributable to secondary lymphedema, hypothyroidism, aging and so on. 4) Relationship between edema and lymph flow: When subjects were divided into 3 groups (non-edema, mild and severe edema), a maximum count over 200 cps was observed in 16.7% of the non-edema group, 45.8% of the mild and 9.1% of the severe edema group.

  20. Scale interactions in economics: application to the evaluation of the economic damages of climatic change and of extreme events; Interactions d'echelles en economie: application a l'evaluation des dommages economiques du changement climatique et des evenements extremes

    Energy Technology Data Exchange (ETDEWEB)

    Hallegatte, S

    2005-06-15

    Growth models, which neglect economic disequilibria, considered as temporary, are generally used to evaluate the damages generated by climate change. This work shows, through a series of modeling experiments, the importance of disequilibria and of the endogenous variability of the economy in the evaluation of damages due to extreme events and climate change. It demonstrates that the evaluation of damages cannot be separated from the representation of growth and of economic dynamics: welfare losses will depend both on the nature and intensity of the impacts and on the dynamics and situation of the economy to which they apply. Thus, the uncertainties about the damages of future climate change stem both from scientific uncertainties and from uncertainties about the future organization of our economies. (J.S.)

  1. Application of computer-aided multi-scale modelling framework – Aerosol case study

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Glarborg, Peter

    2011-01-01

    Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy and water. This trend is set to continue due to the substantial benefits computer-aided methods provide. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task involving numerous steps, expert skills and different modelling tools. This motivates the development of a computer-aided modelling framework that supports the user during model development, documentation, analysis, identification, application and re-use with the goal to increase the efficiency of the modelling...

  2. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    Science.gov (United States)

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  3. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE

  4. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  5. Multi-Scale Computational Enzymology: Enhancing Our Understanding of Enzymatic Catalysis

    OpenAIRE

    Rami Gherib; Hisham M. Dokainish; James W. Gauld

    2013-01-01

    Elucidating the origin of enzymatic catalysis stands as one of the great challenges of contemporary biochemistry and biophysics. The recent emergence of computational enzymology has enhanced our atomistic-level description of biocatalysis as well as the kinetic and thermodynamic properties of their mechanisms. There exists a diversity of computational methods allowing the investigation of specific enzymatic properties. Small or large density functional theory models allow the comparison of a pleth...

  6. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab, and is provided as supplementary information. hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
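
    The IP/LP alternation described above can be walked through on a toy network as in the sketch below, which assumes a four-reaction, all-irreversible network, the open-source CBC solver bundled with PuLP, and a simple constraint that forbids re-deleting a cut set once it has been found; the published AILP algorithm is considerably more general than this illustration.

      import pulp

      # Toy network: R1: -> A,  R2: A -> B,  R3: B ->,  R4: A ->  (all irreversible).
      S = [[1, -1, 0, -1],     # metabolite A balance
           [0,  1, -1, 0]]     # metabolite B balance
      n = 4
      ems, mcss = [], []

      for _ in range(8):
          # IP: smallest reaction-deletion set hitting the support of every EM found so far.
          ip = pulp.LpProblem("hitting_set", pulp.LpMinimize)
          y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n)]
          ip += pulp.lpSum(y)
          for em in ems:
              ip += pulp.lpSum(y[j] for j in em) >= 1
          for m in mcss:       # do not delete an already-known cut set again
              ip += pulp.lpSum(y[j] for j in m) <= len(m) - 1
          ip.solve(pulp.PULP_CBC_CMD(msg=0))
          if pulp.LpStatus[ip.status] != "Optimal":
              break
          deleted = {j for j in range(n) if y[j].value() > 0.5}

          # LP: steady-state flux avoiding the deleted reactions; a vertex solution's
          # support is a new EM, while infeasibility certifies a cut set.
          lp = pulp.LpProblem("flux", pulp.LpMinimize)
          v = [pulp.LpVariable(f"v{j}", lowBound=0) for j in range(n)]
          lp += pulp.lpSum(v)
          for row in S:
              lp += pulp.lpSum(row[j] * v[j] for j in range(n)) == 0
          lp += pulp.lpSum(v) == 1            # exclude the trivial all-zero flux
          for j in deleted:
              lp += v[j] == 0
          lp.solve(pulp.PULP_CBC_CMD(msg=0))
          if pulp.LpStatus[lp.status] == "Optimal":
              ems.append({j for j in range(n) if v[j].value() > 1e-9})
          else:
              mcss.append(deleted)

      print("EMs  (reaction index sets):", ems)    # expect {0, 1, 2} and {0, 3}
      print("MCSs found along the way  :", mcss)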

  7. Attitude extremity, consensus and diagnosticity

    NARCIS (Netherlands)

    van der Pligt, J.; Ester, P.; van der Linden, J.

    1983-01-01

    Studied the effects of attitude extremity on perceived consensus and willingness to ascribe trait terms to others with either pro- or antinuclear attitudes. 611 Ss rated their attitudes toward nuclear energy on a 5-point scale. Results show that attitude extremity affected consensus estimates. Trait

  8. Micro-computed tomography pore-scale study of flow in porous media: Effect of voxel resolution

    Science.gov (United States)

    Shah, S. M.; Gray, F.; Crawshaw, J. P.; Boek, E. S.

    2016-09-01

    A fundamental understanding of flow in porous media at the pore-scale is necessary to be able to upscale average displacement processes from core to reservoir scale. The study of fluid flow in porous media at the pore-scale consists of two key procedures: Imaging - reconstruction of three-dimensional (3D) pore space images; and modelling such as with single and two-phase flow simulations with Lattice-Boltzmann (LB) or Pore-Network (PN) Modelling. Here we analyse pore-scale results to predict petrophysical properties such as porosity, single-phase permeability and multi-phase properties at different length scales. The fundamental issue is to understand the image resolution dependency of transport properties, in order to up-scale the flow physics from pore to core scale. In this work, we use a high resolution micro-computed tomography (micro-CT) scanner to image and reconstruct three dimensional pore-scale images of five sandstones (Bentheimer, Berea, Clashach, Doddington and Stainton) and five complex carbonates (Ketton, Estaillades, Middle Eastern sample 3, Middle Eastern sample 5 and Indiana Limestone 1) at four different voxel resolutions (4.4 μm, 6.2 μm, 8.3 μm and 10.2 μm), scanning the same physical field of view. Implementing three phase segmentation (macro-pore phase, intermediate phase and grain phase) on pore-scale images helps to understand the importance of connected macro-porosity in the fluid flow for the samples studied. We then compute the petrophysical properties for all the samples using PN and LB simulations in order to study the influence of voxel resolution on petrophysical properties. We then introduce a numerical coarsening scheme which is used to coarsen a high voxel resolution image (4.4 μm) to lower resolutions (6.2 μm, 8.3 μm and 10.2 μm) and study the impact of coarsening data on macroscopic and multi-phase properties. Numerical coarsening of high resolution data is found to be superior to using a lower resolution scan because it
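
    The numerical coarsening step can be sketched as below, assuming a binary pore/grain volume (pore voxels equal to 1), integer coarsening factors, and majority-vote re-segmentation of each block; the correlated random volume only stands in for a segmented micro-CT image, and the study's actual coarsening and flow computations are more elaborate.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def coarsen(volume, factor):
          """Block-average a segmented 3D image and re-threshold it to binary."""
          nz, ny, nx = (s - s % factor for s in volume.shape)      # trim to a multiple
          v = volume[:nz, :ny, :nx].reshape(
              nz // factor, factor, ny // factor, factor, nx // factor, factor)
          pore_fraction = v.mean(axis=(1, 3, 5))                   # per-block porosity
          return (pore_fraction >= 0.5).astype(np.uint8)           # majority vote

      # Spatially correlated placeholder for a segmented high-resolution scan (~22% porosity).
      rng = np.random.default_rng(3)
      field = gaussian_filter(rng.standard_normal((120, 120, 120)), sigma=4)
      fine = (field < np.quantile(field, 0.22)).astype(np.uint8)

      for factor in (1, 2, 3):                                     # 1 = original voxel size
          img = fine if factor == 1 else coarsen(fine, factor)
          print(f"coarsening x{factor}: porosity = {img.mean():.3f}, grid = {img.shape}")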

  9. Computational Study of Separation Control Using ZNMF Devices: Flow Physics and Scaling Laws

    National Research Council Canada - National Science Library

    Mittal, Rajat

    2008-01-01

    The primary objective of the proposed research was to gain a fundamental understanding of strategies, mechanisms, and scaling laws for successful control of separation using zero-net mass-flux (ZNMF) actuators...

  10. A Modified Rule of Thumb for Evaluating Scale Reproducibilities Determined by Electronic Computers

    Science.gov (United States)

    Hofmann, Richard J.

    1978-01-01

    The Goodenough technique for determining scale error is compared to the Guttman technique and demonstrated to be more conservative than the Guttman technique. Implications with regard to Guttman's evaluative rule of thumb for evaluating a reproducibility are noted. (Author)

  11. Interaural Level Difference Dependent Gain Control and Synaptic Scaling Underlying Binaural Computation

    Science.gov (United States)

    Xiong, Xiaorui R.; Liang, Feixue; Li, Haifu; Mesik, Lukas; Zhang, Ke K.; Polley, Daniel B.; Tao, Huizhong W.; Xiao, Zhongju; Zhang, Li I.

    2013-01-01

    Binaural integration in the central nucleus of inferior colliculus (ICC) plays a critical role in sound localization. However, its arithmetic nature and underlying synaptic mechanisms remain unclear. Here, we showed in mouse ICC neurons that the contralateral dominance is created by a “push-pull”-like mechanism, with contralaterally dominant excitation and more bilaterally balanced inhibition. Importantly, binaural spiking response is generated apparently from an ipsilaterally-mediated scaling of contralateral response, leaving frequency tuning unchanged. This scaling effect is attributed to a divisive attenuation of contralaterally-evoked synaptic excitation onto ICC neurons with their inhibition largely unaffected. Thus, a gain control mediates the linear transformation from monaural to binaural spike responses. The gain value is modulated by interaural level difference (ILD) primarily through scaling excitation to different levels. The ILD-dependent synaptic scaling and gain adjustment allow ICC neurons to dynamically encode interaural sound localization cues while maintaining an invariant representation of other independent sound attributes. PMID:23972599

  12. Critical exponents of extremal Kerr perturbations

    Science.gov (United States)

    Gralla, Samuel E.; Zimmerman, Peter

    2018-05-01

    We show that scalar, electromagnetic, and gravitational perturbations of extremal Kerr black holes are asymptotically self-similar under the near-horizon, late-time scaling symmetry of the background metric. This accounts for the Aretakis instability (growth of transverse derivatives) as a critical phenomenon associated with the emergent symmetry. We compute the critical exponent of each mode, which is equivalent to its decay rate. It follows from symmetry arguments that, despite the growth of transverse derivatives, all generally covariant scalar quantities decay to zero.

  13. Towards Better Computational Models of the Balance Scale Task: A Reply to Shultz and Takane

    Science.gov (United States)

    van der Maas, Han L. J.; Quinlan, Philip T.; Jansen, Brenda R. J.

    2007-01-01

    In contrast to Shultz and Takane [Shultz, T.R., & Takane, Y. (2007). Rule following and rule use in the balance-scale task. "Cognition", in press, doi:10.1016/j.cognition.2006.12.004.] we do not accept that the traditional Rule Assessment Method (RAM) of scoring responses on the balance scale task has advantages over latent class analysis (LCA):…

  14. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square-root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and the incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique for each direction and wavelength. Finally, the set of power patterns is summed to produce the full waveband diffraction pattern from the scene. With this tool, researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
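
    The summation pipeline described above is outlined schematically below. The per-hole reference fields are reduced to a single synthetic random-phase pattern, hole offsets are applied as integer pixel shifts, and the extra per-hole phase uses a generic plane-wave term for each direction and wavelength; mask geometry, the double-size reference patterns, and radiometry are all simplified away, so the snippet shows only the structure of the computation, not the paper's implementation.

      import numpy as np

      FPA = 256                                    # focal plane array size (pixels)
      PIX = 20e-6                                  # detector pitch (m), illustrative
      wavelengths = [4.0e-6, 4.5e-6]               # two bands (m)
      directions = [(0.0, 0.0), (0.01, 0.0)]       # two lines of sight (rad)
      holes = [(0, 0), (12, -7), (-20, 5)]         # hole offsets in pixels

      rng = np.random.default_rng(5)
      reference = np.exp(1j * rng.uniform(0, 2 * np.pi, (FPA, FPA)))   # stand-in reference field

      total_power = np.zeros((FPA, FPA))
      for lam in wavelengths:
          for (tx, ty) in directions:
              field = np.zeros((FPA, FPA), dtype=complex)
              amplitude = 1.0                                         # sqrt of spectral irradiance
              for (hx, hy) in holes:
                  shifted = np.roll(reference, shift=(hy, hx), axis=(0, 1))   # translational shift
                  phase = 2 * np.pi / lam * (hx * PIX * np.sin(tx) + hy * PIX * np.sin(ty))
                  field += amplitude * shifted * np.exp(1j * phase)   # coherent per-hole term
              total_power += np.abs(field) ** 2                       # square, then sum over
                                                                      # directions and wavelengths
      print(total_power.shape, total_power.mean())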

  15. Computed tomographic angiography criteria in the diagnosis of brain death - comparison of sensitivity and interobserver reliability of different evaluation scales

    International Nuclear Information System (INIS)

    Sawicki, Marcin; Walecka, A.; Bohatyrewicz, R.; Solek-Pastuszka, J.; Safranow, K.; Walecki, J.; Rowinski, O.; Czajkowski, Z.; Guzinski, M.; Burzynska, M.; Wojczal, J.

    2014-01-01

    The standardized diagnostic criteria for computed tomographic angiography (CTA) in diagnosis of brain death (BD) are not yet established. The aim of the study was to compare the sensitivity and interobserver agreement of the three previously used scales of CTA for the diagnosis of BD. Eighty-two clinically brain-dead patients underwent CTA with a delay of 40 s after contrast injection. Catheter angiography was used as the reference standard. CTA results were assessed by two radiologists, and the diagnosis of BD was established according to 10-, 7-, and 4-point scales. Catheter angiography confirmed the diagnosis of BD in all cases. Opacification of certain cerebral vessels as indicator of BD was highly sensitive: cortical segments of the middle cerebral artery (96.3 %), the internal cerebral vein (98.8 %), and the great cerebral vein (98.8 %). Other vessels were less sensitive: the pericallosal artery (74.4 %), cortical segments of the posterior cerebral artery (79.3 %), and the basilar artery (82.9 %). The sensitivities of the 10-, 7-, and 4-point scales were 67.1, 74.4, and 96.3 %, respectively (p < 0.001). Percentage interobserver agreement in diagnosis of BD reached 93 % for the 10-point scale, 89 % for the 7-point scale, and 95 % for the 4-point scale (p = 0.37). In the application of CTA to the diagnosis of BD, reducing the assessment of vascular opacification scale from a 10- to a 4-point scale significantly increases the sensitivity and maintains high interobserver reliability. (orig.)

  16. Computed tomographic angiography criteria in the diagnosis of brain death - comparison of sensitivity and interobserver reliability of different evaluation scales

    Energy Technology Data Exchange (ETDEWEB)

    Sawicki, Marcin; Walecka, A. [Pomeranian Medical University, Department of Diagnostic Imaging and Interventional Radiology, Szczecin (Poland); Bohatyrewicz, R.; Solek-Pastuszka, J. [Pomeranian Medical University, Clinic of Anesthesiology and Intensive Care, Szczecin (Poland); Safranow, K. [Pomeranian Medical University, Department of Biochemistry and Medical Chemistry, Szczecin (Poland); Walecki, J. [The Centre of Postgraduate Medical Education, Warsaw (Poland); Rowinski, O. [Medical University of Warsaw, 2nd Department of Clinical Radiology, Warsaw (Poland); Czajkowski, Z. [Regional Joint Hospital, Szczecin (Poland); Guzinski, M. [Wroclaw Medical University, Department of General Radiology, Interventional Radiology and Neuroradiology, Wroclaw (Poland); Burzynska, M. [Wroclaw Medical University, Department of Anesthesiology and Intensive Therapy, Wroclaw (Poland); Wojczal, J. [Medical University of Lublin, Department of Neurology, Lublin (Poland)

    2014-08-15

    The standardized diagnostic criteria for computed tomographic angiography (CTA) in diagnosis of brain death (BD) are not yet established. The aim of the study was to compare the sensitivity and interobserver agreement of the three previously used scales of CTA for the diagnosis of BD. Eighty-two clinically brain-dead patients underwent CTA with a delay of 40 s after contrast injection. Catheter angiography was used as the reference standard. CTA results were assessed by two radiologists, and the diagnosis of BD was established according to 10-, 7-, and 4-point scales. Catheter angiography confirmed the diagnosis of BD in all cases. Opacification of certain cerebral vessels as indicator of BD was highly sensitive: cortical segments of the middle cerebral artery (96.3 %), the internal cerebral vein (98.8 %), and the great cerebral vein (98.8 %). Other vessels were less sensitive: the pericallosal artery (74.4 %), cortical segments of the posterior cerebral artery (79.3 %), and the basilar artery (82.9 %). The sensitivities of the 10-, 7-, and 4-point scales were 67.1, 74.4, and 96.3 %, respectively (p < 0.001). Percentage interobserver agreement in diagnosis of BD reached 93 % for the 10-point scale, 89 % for the 7-point scale, and 95 % for the 4-point scale (p = 0.37). In the application of CTA to the diagnosis of BD, reducing the assessment of vascular opacification scale from a 10- to a 4-point scale significantly increases the sensitivity and maintains high interobserver reliability. (orig.)

  17. Effects of body position and extension of the neck and extremities on lung volume measured via computed tomography in red-eared slider turtles (Trachemys scripta elegans).

    Science.gov (United States)

    Mans, Christoph; Drees, Randi; Sladky, Kurt K; Hatt, Jean-Michel; Kircher, Patrick R

    2013-10-15

    To determine the effects of body position and extension of the neck and extremities on CT measurements of ventilated lung volume in red-eared slider turtles (Trachemys scripta elegans). Prospective crossover-design study. 14 adult red-eared slider turtles. CT was performed on turtles in horizontal ventral recumbent and vertical left lateral recumbent, right lateral recumbent, and caudal recumbent body positions. In sedated turtles, evaluations were performed in horizontal ventral recumbent body position with and without extension of the neck and extremities. Lung volumes were estimated from helical CT images with commercial software. Effects of body position, extremity and neck extension, sedation, body weight, and sex on lung volume were analyzed. Mean ± SD volume of dependent lung tissue was significantly decreased in vertical left lateral (18.97 ± 14.65 mL), right lateral (24.59 ± 19.16 mL), and caudal (9.23 ± 12.13 mL) recumbent positions, compared with the same region for turtles in horizontal ventral recumbency (48.52 ± 20.08 mL, 50.66 ± 18.08 mL, and 31.95 ± 15.69 mL, respectively). Total lung volume did not differ among positions because of compensatory increases in nondependent lung tissue. Extension of the extremities and neck significantly increased total lung volume (127.94 ± 35.53 mL), compared with that in turtles with the head, neck, and extremities withdrawn into the shell (103.24 ± 40.13 mL). Vertical positioning of red-eared sliders significantly affected lung volumes and could potentially affect interpretation of radiographs obtained in these positions. Extension of the extremities and neck resulted in the greatest total lung volume.

  18. Application of cone beam computed tomography gray scale values in the diagnosis of cysts and tumors

    Directory of Open Access Journals (Sweden)

    Aarfa Nasim

    2018-01-01

    Background: Studies have shown that in CBCT the degree of X-ray attenuation is represented by a gray scale (voxel value) that can be used in characterizing a pathologic lesion. The gray value assesses the density or quality of bone, and the density varies depending on radiation attenuation. CBCT gray values are considered approximate values, and their measurement allows differentiation of soft tissue and fluid from hard tissue. Aim and Objective: We aimed to evaluate the application of CBCT gray scale values of cysts and tumors to assess differences in bony changes and to determine their significance in diagnosing the contents of the lesions. Materials and Methods: The study was conducted in the Department of Oral Medicine and Radiology. Patients clinically diagnosed with either cysts or tumors over a period of 18 months were included in the study. The gray scale reading was taken and a radiological diagnosis was made, which was then compared with the histopathological report of the cysts and tumors. Results: The CBCT gray scale value was found to be effective, superior to conventional radiographic tools, and more useful in diagnosing the nature of cysts and tumors pre-operatively. Conclusion: The CBCT gray value can be considered a major tool in the diagnosis of cysts, tumors, and other soft or hard tissue lesions without any microscopic evaluation. CBCT gray scale measurement is superior to conventional intraoral radiographic methods for diagnosing the nature of lytic lesions of the jaw.

  19. A Pilot-Scale Heat Recovery System for Computer Process Control Teaching and Research.

    Science.gov (United States)

    Callaghan, P. J.; And Others

    1988-01-01

    Describes the experimental system and equipment including an interface box for displaying variables. Discusses features which make the circuit suitable for teaching and research in computing. Feedforward, decoupling, and adaptive control, examination of digital filtering, and a cascade loop are teaching experiments utilizing this rig. Diagrams and…

  20. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan Agarwal

    2016-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
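
    The two kernels can be written down numerically as in the sketch below, which assumes an idealized NxN conductance matrix with no device noise, wire resistance, or quantization: the parallel read is an Ohm's-law current summation (a vector-matrix multiply), and the parallel write is a rank-1 outer-product update.

      import numpy as np

      N = 64
      rng = np.random.default_rng(2)
      G = rng.uniform(1e-6, 1e-4, size=(N, N))     # conductances (siemens)

      # Parallel read: applying a voltage vector to the rows yields, on each column,
      # the dot product of that column's conductances with the voltages (I = G^T V).
      V = rng.uniform(0.0, 0.2, size=N)            # row voltages
      I = G.T @ V                                  # all N dot products in one "step"

      # Parallel write: a rank-1 update, realized by pulsing rows with x and columns
      # with y so each cell (i, j) shifts by an amount proportional to x[i] * y[j].
      x, y = rng.standard_normal(N), rng.standard_normal(N)
      eta = 1e-8                                   # effective update rate
      G += eta * np.outer(x, y)

      print(I.shape, float(I.sum()))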

  1. The cognitive dynamics of computer science cost-effective large scale software development

    CERN Document Server

    De Gyurky, Szabolcs Michael; John Wiley & Sons

    2006-01-01

    This book has three major objectives: To propose an ontology for computer software; To provide a methodology for development of large software systems to cost and schedule that is based on the ontology; To offer an alternative vision regarding the development of truly autonomous systems.

  2. Computed versus measured response of HDR reactor building in large scale shaking tests

    International Nuclear Information System (INIS)

    Werkle, H.; Waas, G.

    1987-01-01

    The earthquake resistant design of NPP structures and their installations is commonly based on linear analysis methods. Nonlinear effects, which may occur during strong earthquakes, are approximately accounted for in the analysis by adjusting the structural damping values. Experimental investigations of nonlinear effects were performed with an extremely heavy shaker at the decommissioned HDR reactor building in West Germany. The tests were directed by KfK (Nuclear Research Center Karlsruhe, West Germany) and supported by several companies and institutes from West Germany, Switzerland and the USA. The objective was to investigate the dynamic response behaviour of the structure, piping and components under strong earthquake-like shaking, including nonlinear effects. This paper presents some results of safety analyses and measurements performed prior to and during the test series. The intention was to shake the building up to a level at which only a marginal safety margin against global structural failure remained.

  3. Large Scale Document Inversion using a Multi-threaded Computing System

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment and grow dramatically in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to index. The performance of document inversion can be improved by a multi-threaded or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.
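
    As a minimal illustration of the data structure involved, the sketch below builds a hash-based inverted index in plain single-threaded Python. The authors' SPMD CUDA implementation distributes this work across GPU threads, but the term-to-document mapping is the same. The toy corpus is hypothetical.

```python
from collections import defaultdict

def build_inverted_index(documents):
    """Map each term to the sorted list of document IDs that contain it.

    A hash table (dict) keyed by term gives the linear-time behaviour the
    abstract describes; a GPU version would parallelize over documents.
    """
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

# Hypothetical toy corpus standing in for PubMed abstracts or product reviews.
docs = {
    1: "GPU computing accelerates document inversion",
    2: "inverted index enables full text search",
    3: "parallel GPU threads build the inverted index",
}
index = build_inverted_index(docs)
print(index["gpu"])        # -> [1, 3]
print(index["inverted"])   # -> [2, 3]
```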

  4. Large Scale Document Inversion using a Multi-threaded Computing System.

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment and grow dramatically in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to index. The performance of document inversion can be improved by a multi-threaded or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.

  5. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix, defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
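
    To make the fitness computation concrete, the sketch below scores a candidate reduced-order discrete model by how closely its step response tracks the full model, which is the kind of response-deviation criterion the abstract describes. The system matrices, the first-order candidate, and the exact fitness form are illustrative assumptions rather than the authors' implementation, and the surrounding GA machinery (encoding, selection, crossover, mutation) is omitted.

```python
import numpy as np

def step_response(A, B, C, D, n_steps=50):
    """Unit-step response of a discrete state-space model (SISO, illustrative)."""
    x = np.zeros(A.shape[0])
    y = []
    for _ in range(n_steps):
        y.append(float(C @ x + D))
        x = A @ x + B.flatten()          # u[k] = 1 for a unit step
    return np.array(y)

def fitness(full, reduced):
    """Higher fitness for a reduced model whose step response tracks the full one."""
    deviation = np.sum((step_response(*full) - step_response(*reduced)) ** 2)
    return 1.0 / (1.0 + deviation)       # the GA maximizes this

# Hypothetical two-time-scale full model (one slow mode, one fast mode) ...
A_full = np.array([[0.95, 0.02], [0.0, 0.30]])
B_full = np.array([[0.1], [1.0]])
C_full = np.array([1.0, 0.5])
D_full = 0.0

# ... and a first-order candidate that a GA individual might encode.
A_red, B_red, C_red, D_red = np.array([[0.95]]), np.array([[0.15]]), np.array([1.0]), 0.0

full = (A_full, B_full, C_full, D_full)
reduced = (A_red, B_red, C_red, D_red)
print(f"candidate fitness: {fitness(full, reduced):.4f}")
```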

  6. Computer system for the assessment of radiation situation in the cases of radiological accidents and extreme weather conditions in the Chernobyl exclusion zone

    Energy Technology Data Exchange (ETDEWEB)

    Talerko, M.; Garger, E.; Kuzmenko, A. [Institute for Safety Problems of Nuclear Power Plants (Ukraine)

    2014-07-01

    The radiation situation within the Chernobyl Exclusion Zone (ChEZ) is determined by the high radionuclide contamination of the land surface formed after the 1986 accident, as well as by the presence of a number of potentially hazardous objects (the 'Shelter' object, the Interim Spent Nuclear Fuel Dry Storage Facility ISF-1, radioactive waste disposal sites, radioactive waste temporary localization sites, etc.). The air concentration of radionuclides over the ChEZ territory and the radiation exposure of personnel are influenced by natural and anthropogenic factors: variable weather conditions, forest fires, construction and excavation activity, etc. A comprehensive radiation monitoring and early warning system in the ChEZ was established with the financial support of the European Commission in 2011. It includes a computer system developed for the assessment and prediction of the consequences of radiological emergencies in the ChEZ, ensuring the protection of personnel and of the population living near its borders. The system assesses the radiation situation both under normal conditions in the ChEZ and during radiological emergencies that result in considerable radionuclide emissions into the air (accidents at radiation-hazardous objects, extreme weather conditions). Three different types of radionuclide release sources can be considered in the software package, so it is based on a set of models of emission, atmospheric transport and deposition of radionuclides: 1) the mesoscale model of radionuclide atmospheric transport LEDI, for calculations of radionuclide emissions from stacks and buildings; 2) a model of atmospheric transport and deposition of radionuclides due to anthropogenic resuspension from the contaminated area (area surface source model) as a result of construction and excavation activity, heavy traffic, etc.; 3) a model of resuspension, atmospheric transport and deposition of radionuclides during grassland and forest fires in the ChEZ. The system calculates the volume and surface
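
    The LEDI mesoscale model and the resuspension and fire source models are not specified in enough detail here to reproduce. As a generic, textbook-level illustration of the kind of transport-and-deposition calculation such a system chains together, the sketch below evaluates a ground-level Gaussian plume concentration for a continuous point release; the release rate, wind speed, release height, and dispersion parameters are all assumed values.

```python
import numpy as np

# Generic, textbook Gaussian-plume estimate of ground-level air concentration
# downwind of a continuous point release. This is NOT the LEDI mesoscale model
# or the ChEZ resuspension/fire models, only an illustration of the kind of
# transport calculation such a system chains together.

def ground_level_concentration(Q, u, y, H, sigma_y, sigma_z):
    """Ground-level concentration [Bq/m^3] for release rate Q [Bq/s], wind
    speed u [m/s], crosswind distance y [m], effective release height H [m],
    and dispersion parameters sigma_y, sigma_z [m]. In practice sigma_y and
    sigma_z are functions of downwind distance and atmospheric stability."""
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Illustrative (assumed) values: 1 GBq/s release, 3 m/s wind, receptor on the
# plume centreline, dispersion parameters typical of roughly 1 km downwind.
c = ground_level_concentration(Q=1e9, u=3.0, y=0.0, H=30.0,
                               sigma_y=80.0, sigma_z=40.0)
print(f"ground-level concentration ~ {c:.2e} Bq/m^3")
```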

  7. SCALE: A modular code system for performing stand