WorldWideScience

Sample records for extreme scale computing

  1. Extreme Scale Computing for First-Principles Plasma Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Choong-Seock [Princeton University]

    2011-10-12

    World superpowers are in the middle of the “Computnik” race. The US Department of Energy (and National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme-scale computer system requires different types of algorithms and programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study fusion plasma dynamics in full-scale, realistic diverted device geometry in its natural nonlinear multiscale setting, including large-scale neoclassical and small-scale turbulence physics, but excluding some ultra-fast dynamics. In this talk, most of the above-mentioned topics will be introduced at an executive level. Representative properties of extreme-scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale, multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme-scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.

  2. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  3. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from the predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  4. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  5. Multi-level programming paradigm for extreme computing

    International Nuclear Information System (INIS)

    Petiton, S.; Sato, M.; Emad, N.; Calvin, C.; Tsuji, M.; Dandouna, M.

    2013-01-01

    In order to propose a framework and programming paradigms for post-petascale computing, on the road to exascale computing and beyond, we introduced new languages, associated with a hierarchical multi-level programming paradigm, allowing scientific end-users and developers to program highly hierarchical architectures designed for extreme computing. In this paper, we explain the interest of such a hierarchical multi-level programming paradigm for extreme computing and its suitability for several large computational science applications, such as linear algebra solvers used for reactor core physics. We describe the YML language and framework, which allow describing graphs of parallel components that may be developed using PGAS-like languages such as XMP and then scheduled and computed on supercomputers. Then, we present experiments on supercomputers (such as the 'K' computer and 'Hopper') with the hybrid method MERAM (Multiple Explicitly Restarted Arnoldi Method) as a case study for iterative methods manipulating sparse matrices, and the block Gauss-Jordan method as a case study for direct methods manipulating dense matrices. We conclude by proposing evolutions of this programming paradigm. (authors)
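
    The block Gauss-Jordan case study is a dense direct method: the matrix is inverted block by block, and in a framework such as YML each block operation becomes a task in a component graph. A minimal serial sketch of the numerical kernel (plain NumPy, no pivoting, not the YML/XMP implementation) is:

```python
import numpy as np

def block_gauss_jordan_inverse(A, block):
    """Invert A via block Gauss-Jordan elimination on the augmented matrix [A | I].
    No pivoting: assumes the diagonal blocks remain well conditioned."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for k in range(0, n, block):
        kk = slice(k, min(k + block, n))
        # Normalize the pivot block row: multiply by the inverse of the pivot block.
        aug[kk, :] = np.linalg.solve(aug[kk, kk], aug[kk, :])
        # Eliminate block column k from every other block row.
        for i in range(0, n, block):
            if i == k:
                continue
            ii = slice(i, min(i + block, n))
            aug[ii, :] -= aug[ii, kk] @ aug[kk, :]
    return aug[:, n:]   # right half of the augmented matrix now holds A^{-1}

A = np.random.default_rng(0).standard_normal((8, 8)) + 8 * np.eye(8)
print(np.allclose(block_gauss_jordan_inverse(A, 2) @ A, np.eye(8)))
```

    In a multi-level setting, each pivot solve and block update would itself be a parallel kernel, and the outer loop structure is what the task graph expresses.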

  6. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sewell, Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Meredith, Jeremy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  7. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States)

    2017-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  8. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D.; Sewell, Christopher (LANL); Childs, Hank (U of Oregon); Ma, Kwan-Liu (UC Davis); Geveci, Berk (Kitware); Meredith, Jeremy (ORNL)

    2016-05-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  9. Software challenges in extreme scale systems

    International Nuclear Information System (INIS)

    Sarkar, Vivek; Harrod, William; Snavely, Allan E

    2009-01-01

    Computer systems anticipated in the 2015-2020 timeframe are referred to as Extreme Scale because they will be built using massive multi-core processors with hundreds of cores per chip. The largest capability Extreme Scale system is expected to deliver Exascale performance of the order of 10^18 operations per second. These systems pose new critical challenges for software in the areas of concurrency, energy efficiency and resiliency. In this paper, we discuss the implications of the concurrency and energy efficiency challenges on future software for Extreme Scale Systems. From an application viewpoint, the concurrency and energy challenges boil down to the ability to express and manage parallelism and locality by exploring a range of strong scaling and new-era weak scaling techniques. For expressing parallelism and locality, the key challenges are the ability to expose all of the intrinsic parallelism and locality in a programming model, while ensuring that this expression of parallelism and locality is portable across a range of systems. For managing parallelism and locality, the OS-related challenges include parallel scalability, spatial partitioning of OS and application functionality, direct hardware access for inter-processor communication, and asynchronous rather than interrupt-driven events, which are accompanied by runtime system challenges for scheduling, synchronization, memory management, communication, performance monitoring, and power management. We conclude by discussing the importance of software-hardware co-design in addressing the fundamental challenges for application enablement on Extreme Scale systems.
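
    The distinction between strong scaling and "new-era" weak scaling that the authors invoke can be made concrete with the standard idealized speedup models (Amdahl's and Gustafson's laws); these are textbook formulas quoted for context, not results from the paper:

```latex
% p = number of processors, f = parallelizable fraction of the work
% Strong scaling (fixed total problem size, Amdahl):
S_{\mathrm{strong}}(p) = \frac{1}{(1 - f) + f/p}
% Weak scaling (problem size grows with p, Gustafson):
S_{\mathrm{weak}}(p) = (1 - f) + f\,p
```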

  10. Extreme-scale Algorithms and Solver Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States)

    2016-12-10

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  11. Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR

    Science.gov (United States)

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  12. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Poster

    Science.gov (United States)

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  13. Faster Parallel Traversal of Scale Free Graphs at Extreme Scale with Vertex Delegates

    KAUST Repository

    Pearce, Roger

    2014-11-01

    © 2014 IEEE. At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), which lead to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices. We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and Page-Rank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.
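
    The core idea, spreading the edges of high-degree hubs across many ranks that act as delegates while low-degree vertices keep a single owner, can be sketched in a few lines. The partitioner below is an illustrative toy (the threshold, hashing, and round-robin placement are assumptions, not the paper's algorithm):

```python
from collections import defaultdict

def partition_with_delegates(edges, num_ranks, hub_threshold):
    """Toy edge partitioner: each low-degree vertex is owned by hash(v) % num_ranks,
    while edges incident to high-degree hubs are spread round-robin over all ranks,
    which then act as delegates for that hub during traversal."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    hubs = {v for v, d in degree.items() if d >= hub_threshold}

    owner = lambda v: hash(v) % num_ranks
    parts = defaultdict(list)
    rr = 0
    for u, v in edges:
        if u in hubs or v in hubs:
            parts[rr % num_ranks].append((u, v))   # delegated hub edge
            rr += 1
        else:
            parts[owner(u)].append((u, v))         # ordinary 1D ownership
    return parts, hubs

edges = [(0, i) for i in range(1, 100)] + [(5, 6), (7, 8)]   # vertex 0 is a hub
parts, hubs = partition_with_delegates(edges, num_ranks=4, hub_threshold=16)
print(hubs, {r: len(es) for r, es in parts.items()})
```

    During a traversal such as BFS or PageRank, partial results for a hub are then combined across its delegates, which is where the paper's asynchronous broadcast and reduction operations come in.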

  14. Faster Parallel Traversal of Scale Free Graphs at Extreme Scale with Vertex Delegates

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2014-01-01

    © 2014 IEEE. At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), which lead to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices. We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and Page-Rank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.

  15. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Loth, Eric [University of Virginia; Kaminski, Meghan [University of Virginia; Qin, Chao [University of Virginia; Griffith, D. Todd [Sandia National Laboratories

    2017-06-09

    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows their sub-scale turbines to capture the key blade dynamics and aeroelastic deflections. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling and blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts including dynamic behaviors and aeroelastic deflections (including flutter) at an extremely small fraction of the full-scale cost.
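
    To make the bookkeeping concrete, the sketch below computes scale factors from a geometric length ratio under Froude-like (gravity-preserving) scaling assumptions. The exponents are illustrative textbook assumptions rather than the gravo-aeroelastic constraints derived in the paper, though for a 25% length ratio they do reproduce the stated factor-of-two reduction in test wind speed:

```python
import math

def froude_like_scale_factors(length_ratio):
    """Toy scale-factor calculator under Froude-like scaling, where gravity is
    preserved: velocity and time scale with sqrt(s), mass with s**3, and rotor
    angular speed with 1/sqrt(s). Assumed exponents, not the paper's derivation."""
    s = length_ratio
    return {
        "length": s,
        "wind_speed": math.sqrt(s),      # s = 0.25 -> half the full-scale wind speed
        "time": math.sqrt(s),
        "mass": s ** 3,
        "rotor_speed": 1.0 / math.sqrt(s),
    }

print(froude_like_scale_factors(0.25))
```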

  16. Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR Staging

    Science.gov (United States)

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  17. Extreme Scale Computing to Secure the Nation

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

    2009-11-10

    absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today. In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT), together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance upon calibration with past or future test data, is a long-term goal of the ASC program. This

  18. Extreme-Scale De Novo Genome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Georganas, Evangelos [Intel Corporation, Santa Clara, CA (United States); Hofmeyr, Steven [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Rokhsar, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.

    2017-09-26

    De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
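
    A central distributed data structure in assemblers of this kind is a k-mer hash table partitioned across ranks, so that counting and graph traversal for a given k-mer happen on its single owner. A toy, single-process sketch of that partitioning (illustrative only, not HipMer's implementation):

```python
from collections import Counter

def kmers(read, k):
    """Yield all overlapping k-mers of a read."""
    return (read[i:i + k] for i in range(len(read) - k + 1))

def partition_kmer_counts(reads, k, num_ranks):
    """Assign each k-mer to a 'rank' by hashing and count it in that rank's
    local table, mimicking a distributed k-mer hash table on one process."""
    local_tables = [Counter() for _ in range(num_ranks)]
    for read in reads:
        for km in kmers(read, k):
            local_tables[hash(km) % num_ranks][km] += 1
    return local_tables

tables = partition_kmer_counts(["ACGTACGT", "CGTACGTT"], k=4, num_ranks=4)
print([dict(t) for t in tables])
```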

  19. Scientific Grand Challenges: Discovery In Basic Energy Sciences: The Role of Computing at the Extreme Scale - August 13-15, 2009, Washington, D.C.

    Energy Technology Data Exchange (ETDEWEB)

    Galli, Giulia [Univ. of California, Davis, CA (United States). Workshop Chair; Dunning, Thom [Univ. of Illinois, Urbana, IL (United States). Workshop Chair

    2009-08-13

    The U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences (BES) and Office of Advanced Scientific Computing Research (ASCR) workshop in August 2009 on extreme-scale computing provided a forum for more than 130 researchers to explore the needs and opportunities that will arise due to expected dramatic advances in computing power over the next decade. This scientific community firmly believes that the development of advanced theoretical tools within chemistry, physics, and materials science—combined with the development of efficient computational techniques and algorithms—has the potential to revolutionize the discovery process for materials and molecules with desirable properties. Doing so is necessary to meet the energy and environmental challenges of the 21st century as described in various DOE BES Basic Research Needs reports. Furthermore, computational modeling and simulation are a crucial complement to experimental studies, particularly when quantum mechanical processes controlling energy production, transformations, and storage are not directly observable and/or controllable. Many processes related to the Earth’s climate and subsurface need better modeling capabilities at the molecular level, which will be enabled by extreme-scale computing.

  20. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path for implementing network contention and bandwidth capacity modeling, using a less synchronous, yet sufficiently accurate, model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.
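
    A contention-and-bandwidth model of this general flavor can be stated very compactly. The toy below (an illustration, not xSim's model) charges each message a per-link latency plus a serialization time at an effective bandwidth divided among the messages sharing the most contended link on its path:

```python
from collections import defaultdict

class ToyContentionModel:
    """Minimal link-contention model: fixed per-link latency and bandwidth,
    with overlapping messages sharing a link's bandwidth evenly."""

    def __init__(self, link_latency_s, link_bandwidth_Bps):
        self.latency = link_latency_s
        self.bandwidth = link_bandwidth_Bps
        self.active = defaultdict(int)   # link id -> number of in-flight messages

    def start(self, path):
        for link in path:
            self.active[link] += 1

    def finish(self, path):
        for link in path:
            self.active[link] -= 1

    def transfer_time(self, size_bytes, path):
        contention = max(self.active[link] for link in path) or 1
        return len(path) * self.latency + size_bytes * contention / self.bandwidth

model = ToyContentionModel(link_latency_s=1e-6, link_bandwidth_Bps=10e9)
path = ["nic0", "switch0", "nic1"]
model.start(path)
print(model.transfer_time(1 << 20, path))   # one 1 MiB message, no competing traffic
model.finish(path)
```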

  1. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  2. Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

    2009-07-02

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER), in partnership with the Office of Advanced Scientific Computing Research (ASCR), held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. The four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; and Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

  3. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  4. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  5. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  6. Extreme Scale Computing Studies

    Science.gov (United States)

    2010-12-01

    systems that would fall under the Exascale rubric. In this chapter, we first discuss the attributes by which achievement of the label “Exascale” may be ...

  7. Computational discovery of extremal microstructure families

    Science.gov (United States)

    Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech

    2018-01-01

    Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124

  8. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    Science.gov (United States)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

    Extreme Weather Events (EWEs) cause negative impacts socially, economically, and environmentally. Considering these facts, forecasting EWEs is crucial work. Indonesia has been identified as being among the countries most vulnerable to the risk of natural disasters, such as floods, heat waves, and droughts. Current forecasting of extreme events in Indonesia is carried out by interpreting synoptic maps for several fields without taking into account the link between the observed events in the 'target' area and remote conditions. This situation may cause misidentification of the event, leading to an inaccurate prediction. Grotjahn and Faure (2008) compute composite maps from extreme events (including heat waves and intense rainfall) to help forecasters identify such events in model output. The composite maps show large-scale meteorological patterns (LSMP) that occurred during historical EWEs. Some vital information about the EWEs can be acquired from studying such maps, in addition to providing forecaster guidance. Such maps have robust mid-latitude meteorological patterns (for Sacramento and California Central Valley, USA EWEs). We study the performance of the composite approach for tropical weather conditions such as those in Indonesia. Initially, the composite maps are developed to identify and forecast the extreme weather events in Indramayu district, West Java, the main producer of rice in Indonesia, which contributes about 60% of the national total rice production. Studying extreme weather events happening in Indramayu is important since EWEs there affect national agricultural and fisheries activities. During a recent EWE more than a thousand houses in Indramayu suffered from serious flooding with each home more than one meter underwater. The flood also destroyed a thousand hectares of rice plantings in 5 regencies. Identifying the dates of extreme events is one of the most important steps and has to be carried out carefully. An approach has been applied to identify the

  9. Effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders among computer workers: a randomized controlled trial.

    Science.gov (United States)

    Esmaeilzadeh, Sina; Ozcan, Emel; Capan, Nalan

    2014-01-01

    The aim of the study was to determine the effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders (WUEMSDs) among computer workers. Four hundred computer workers answered a questionnaire on work-related upper extremity musculoskeletal symptoms (WUEMSS). Ninety-four subjects with WUEMSS using computers at least 3 h a day participated in a prospective, randomized controlled 6-month intervention. Body posture and workstation layouts were assessed by the Ergonomic Questionnaire. We used the Visual Analogue Scale to assess the intensity of WUEMSS. The Upper Extremity Function Scale was used to evaluate functional limitations at the neck and upper extremities. Health-related quality of life was assessed with the Short Form-36. After baseline assessment, those in the intervention group participated in a multicomponent ergonomic intervention program including a comprehensive ergonomic training consisting of two interactive sessions, an ergonomic training brochure, and workplace visits with workstation adjustments. Follow-up assessment was conducted after 6 months. In the intervention group, body posture and workstation layouts improved over the follow-up period. Ergonomic intervention programs may be effective in reducing ergonomic risk factors among computer workers and consequently in the secondary prevention of WUEMSDs.

  10. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Biros, George [Univ. of Texas, Austin, TX (United States)

    2018-01-12

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a
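
    The randomized maximum likelihood (RML) idea mentioned in item 1 replaces MCMC sampling with repeated deterministic optimizations: each approximate posterior sample comes from perturbing the data and the prior and then solving a regularized inverse problem. The sketch below is a generic linear-Gaussian toy with made-up dimensions and noise levels, not the project's PDE-constrained implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_data, n_param = 20, 10
G = rng.standard_normal((n_data, n_param))      # stand-in for a parameter-to-observable map
x_true = rng.standard_normal(n_param)
sigma_d, sigma_p = 0.1, 1.0                     # assumed noise / prior standard deviations
d_obs = G @ x_true + sigma_d * rng.standard_normal(n_data)

def rml_sample():
    """One RML draw: perturb data and prior mean, then solve a MAP-like problem."""
    d_pert = d_obs + sigma_d * rng.standard_normal(n_data)
    x_prior = sigma_p * rng.standard_normal(n_param)
    objective = lambda x: (np.sum((G @ x - d_pert) ** 2) / sigma_d ** 2
                           + np.sum((x - x_prior) ** 2) / sigma_p ** 2)
    return minimize(objective, x_prior).x

samples = np.array([rml_sample() for _ in range(100)])
print("posterior mean estimate:", np.round(samples.mean(axis=0), 2))
```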

  11. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
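
    A minimal version of the peaks-over-threshold fit and the scale-invariance step can be written with SciPy's Generalized Pareto implementation. The synthetic data, fixed threshold, and single power-law exponent below are illustrative assumptions; the study models the parameters and exceedance rates with a Bayesian approach and quantifies the exponent's uncertainty.

```python
import numpy as np
from scipy.stats import genpareto

def fit_gpd_over_threshold(precip, threshold):
    """Fit a GPD to exceedances over a threshold; returns shape, scale, exceedance rate."""
    precip = np.asarray(precip, dtype=float)
    excess = precip[precip > threshold] - threshold
    shape, _, scale = genpareto.fit(excess, floc=0.0)   # location fixed at zero
    return shape, scale, excess.size / precip.size

def downscale_gpd_scale(scale_long, d_long_hr, d_short_hr, eta=0.7):
    """Scale-invariance assumption: sigma(d) = sigma(D) * (d / D) ** eta."""
    return scale_long * (d_short_hr / d_long_hr) ** eta

rain_24h = np.random.default_rng(2).gamma(shape=0.8, scale=12.0, size=5000)  # synthetic series
shape, scale, rate = fit_gpd_over_threshold(rain_24h, threshold=np.quantile(rain_24h, 0.95))
print(shape, scale, rate, downscale_gpd_scale(scale, 24, 3))
```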

  12. Improving the Performance of the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, such as by reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020\\% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) and benchmark, as well as, from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.

  13. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2018-03-27

    Algorithmic and architecture-oriented optimizations are essential for achieving performance worthy of anticipated energy-austere exascale systems. In this paper, we present an extreme scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory, targeting emerging Intel extreme performance HPC architectures. We extract the potential thread- and data-level parallelism of the key Helmholtz kernels of FMM. Our application code is well optimized to exploit the AVX-512 SIMD units of Intel Skylake and Knights Landing architectures. We provide different performance models for tuning the task-based tree traversal implementation of FMM, and develop optimal architecture-specific and algorithm aware partitioning, load balancing, and communication reducing mechanisms to scale up to 6,144 compute nodes of a Cray XC40 with 196,608 hardware cores. With shared memory optimizations, we achieve roughly 77% of peak single precision floating point performance of a 56-core Skylake processor, and on average 60% of peak single precision floating point performance of a 72-core KNL. These numbers represent nearly 5.4x and 10x speedup on Skylake and KNL, respectively, compared to the baseline scalar code. With distributed memory optimizations, on the other hand, we report near-optimal efficiency in the weak scalability study with respect to both the logarithmic communication complexity as well as the theoretical scaling complexity of FMM. In addition, we exhibit up to 85% efficiency in strong scaling. We compute in excess of 2 billion DoF on the full-scale of the Cray XC40 supercomputer.
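
    The solver structure described here, GMRES with the FMM standing in for the matrix-vector product, maps directly onto a matrix-free Krylov interface. The sketch below uses SciPy's GMRES with a dense placeholder operator in place of the Helmholtz FMM; everything about the operator is a toy assumption, and only the matrix-free pattern is the point:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

N = 500
rng = np.random.default_rng(0)
A_dense = np.eye(N) + 0.01 * rng.standard_normal((N, N))   # placeholder for the BIE operator

def fmm_matvec(x):
    # In the real solver this would be a fast multipole evaluation of the
    # discretized boundary integral operator; here it is a plain dense product.
    return A_dense @ x

A = LinearOperator((N, N), matvec=fmm_matvec, dtype=float)
b = rng.standard_normal(N)
x, info = gmres(A, b)                  # GMRES never forms the matrix explicitly
print(info, np.linalg.norm(A_dense @ x - b))
```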

  14. Computational data sciences for assessment and prediction of climate extremes

    Science.gov (United States)

    Ganguly, A. R.

    2011-12-01

    Climate extremes may be defined inclusively as severe weather events or large shifts in global or regional weather patterns which may be caused or exacerbated by natural climate variability or climate change. This area of research arguably represents one of the largest knowledge gaps in climate science relevant for informing resource managers and policy makers. While physics-based climate models are essential in view of non-stationary and nonlinear dynamical processes, their current pace of uncertainty reduction may not be adequate for urgent stakeholder needs. The structure of the models may in some cases preclude reduction of uncertainty for critical processes at scales or for the extremes of interest. On the other hand, methods based on complex networks, extreme value statistics, machine learning, and space-time data mining have demonstrated significant promise to improve scientific understanding and generate enhanced predictions. When combined with conceptual process understanding at multiple spatiotemporal scales and designed to handle massive data, interdisciplinary data science methods and algorithms may complement or supplement physics-based models. Specific examples from the prior literature and our ongoing work suggest how data-guided improvements may be possible, for example, in the context of ocean meteorology, climate oscillators, teleconnections, and atmospheric process understanding, which in turn can improve projections of regional climate, precipitation extremes and tropical cyclones in a useful and interpretable fashion. A community-wide effort is motivated to develop and adapt computational data science tools for translating climate model simulations to information relevant for adaptation and policy, as well as for improving our scientific understanding of climate extremes from both observed and model-simulated data.

  15. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  16. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    Science.gov (United States)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional- to local-scales. While global climate models are generally capable of simulating mean climate at global-to-regional scales with reasonable skill, resiliency and adaptation decisions are made at local-scales where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local-scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.

  17. On the nonlinearity of spatial scales in extreme weather attribution statements

    Science.gov (United States)

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; Wehner, Michael; Shiogama, Hideo; Wolski, Piotr; Ciavarella, Andrew; Christidis, Nikolaos

    2018-04-01

    In the context of ongoing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used, duration, spatial extent, and geographic location of the event—some of these factors are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
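
    The "change in probability of occurrence due to anthropogenic greenhouse-gas emissions" used to characterise these attribution statements is conventionally summarised with the probability ratio and the fraction of attributable risk. These are the standard definitions, quoted here for context rather than taken from the paper:

```latex
% p_1: probability of exceeding the event threshold with anthropogenic forcings,
% p_0: probability in the counterfactual (natural-forcings-only) climate.
PR = \frac{p_1}{p_0}, \qquad FAR = 1 - \frac{p_0}{p_1}
```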

  18. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  19. Climatic forecast: down-scaling and extremes

    International Nuclear Information System (INIS)

    Deque, M.; Li, L.

    2007-01-01

    There is a strong demand for specifying the future climate at local scale and about extreme events. New methods, allowing a better output from the climate models, are currently being developed and French laboratories involved in the Escrime project are actively participating. (authors)

  20. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
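
    The essence of the tile low-rank format is that each data-sparse off-diagonal tile is stored as a pair of thin factors instead of a dense block. The sketch below compresses one tile with a truncated SVD; the truncation rule and the synthetic kernel are generic assumptions, not HiCMA's actual compression kernel:

```python
import numpy as np

def compress_tile(tile, tol):
    """Return thin factors U, V with tile ≈ U @ V, keeping singular values above tol."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    rank = max(1, int(np.sum(s > tol)))
    return U[:, :rank] * s[:rank], Vt[:rank, :]

# Off-diagonal tiles of smooth kernels on well-separated point sets are numerically low rank.
x = np.linspace(0.0, 1.0, 256)
tile = 1.0 / (1.0 + np.abs(x[:, None] - (x[None, :] + 5.0)))
U, V = compress_tile(tile, tol=1e-8)
print("rank:", U.shape[1], "error:", np.linalg.norm(tile - U @ V))
```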

  1. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  2. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when
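
    A minimal sketch of the integration pattern described in the last sentences, assuming nothing about the real Aspen or CODES interfaces: a closed-form compute-time model generates per-rank computation events, and those events drive a communication phase inside a toy discrete-event loop. All constants are placeholders.

```python
# Toy analytical-model-driven discrete-event loop (not the Aspen/CODES APIs).
import heapq
import random

def compute_time(rank, flops=2.0e9, flop_rate=1.0e12, jitter=0.05):
    """Closed-form estimate of one rank's compute-phase duration (seconds)."""
    rng = random.Random(rank)                     # per-rank deterministic jitter
    return (flops / flop_rate) * (1.0 + jitter * rng.random())

def simulate(n_ranks, msg_latency=2.0e-6):
    """Compute events trigger communication events; return phase finish time."""
    events = [(compute_time(r), r, "send") for r in range(n_ranks)]
    heapq.heapify(events)
    finish = 0.0
    while events:
        t, rank, kind = heapq.heappop(events)
        if kind == "send":                        # computation finished on `rank`
            heapq.heappush(events, (t + msg_latency, rank, "recv"))
        else:                                     # message delivered
            finish = max(finish, t)
    return finish

print("simulated compute+communicate phase: %.3e s" % simulate(n_ranks=8))
```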

  3. Extreme Physics and Informational/Computational Limits

    Energy Technology Data Exchange (ETDEWEB)

    Di Sia, Paolo, E-mail: paolo.disia@univr.it, E-mail: 10alla33@virgilio.it [Department of Computer Science, Faculty of Science, Verona University, Strada Le Grazie 15, I-37134 Verona (Italy) and Faculty of Computer Science, Free University of Bozen, Piazza Domenicani 3, I-39100 Bozen-Bolzano (Italy)

    2011-07-08

    A sector of current theoretical physics, sometimes called 'extreme physics', deals with topics concerning superstring theories, the multiverse, quantum teleportation, negative energy, and more, which only a few years ago were considered scientific imagination or purely speculative physics. Present experimental lines of evidence and implications of cosmological observations seem, on the contrary, to support such theories. These new physical developments lead to informational limits, such as the quantity of information that a physical system can record, and computational limits, resulting from considerations regarding black holes and space-time fluctuations. In this paper I consider important limits for information and computation resulting in particular from string theories and their foundations.

  4. Extreme Physics and Informational/Computational Limits

    International Nuclear Information System (INIS)

    Di Sia, Paolo

    2011-01-01

    A sector of current theoretical physics, sometimes called 'extreme physics', deals with topics concerning superstring theories, the multiverse, quantum teleportation, negative energy, and more, which only a few years ago were considered scientific imagination or purely speculative physics. Present experimental lines of evidence and implications of cosmological observations seem, on the contrary, to support such theories. These new physical developments lead to informational limits, such as the quantity of information that a physical system can record, and computational limits, resulting from considerations regarding black holes and space-time fluctuations. In this paper I consider important limits for information and computation resulting in particular from string theories and their foundations.

  5. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating the hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
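
    A minimal sketch of the regularized extreme-learning step described above, on synthetic data: hop-count vectors to anchor nodes pass through a fixed random hidden layer, and ridge-regularized output weights mapping hidden features to physical distances are obtained in closed form. The network sizes and noise model are assumptions, not the paper's setup.

```python
# Regularized extreme learning machine: random hidden layer + ridge output weights.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: hop-count vectors to 8 anchors and the corresponding
# physical distances to those anchors (placeholder generative model).
X = rng.integers(1, 15, size=(200, 8)).astype(float)
y = X * rng.uniform(20.0, 30.0, size=(1, 8)) + rng.normal(0.0, 5.0, size=(200, 8))

n_hidden, lam = 64, 1e-2                      # hidden nodes, ridge penalty
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
b = rng.normal(size=n_hidden)

def hidden(H_in):
    return np.tanh(H_in @ W + b)              # fixed random feature map

H = hidden(X)
# Closed-form ridge solution: beta = (H^T H + lam I)^-1 H^T y
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

X_test = rng.integers(1, 15, size=(5, 8)).astype(float)
print("predicted anchor distances:\n", hidden(X_test) @ beta)
```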

  6. ExM:System Support for Extreme-Scale, Many-Task Applications

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Daniel S

    2011-05-31

    The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem solving methods and application classes. In this document, we report on combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at University of Chicago, over the first 8 months (through April 30, 2011).

  7. Temporal and spatial scaling impacts on extreme precipitation

    Science.gov (United States)

    Eggert, B.; Berg, P.; Haerter, J. O.; Jacob, D.; Moseley, C.

    2015-01-01

    Both in the current climate and in the light of climate change, understanding of the causes and risk of precipitation extremes is essential for protection of human life and adequate design of infrastructure. Precipitation extreme events depend qualitatively on the temporal and spatial scales at which they are measured, in part due to the distinct types of rain formation processes that dominate extremes at different scales. To capture these differences, we first filter large datasets of high-resolution radar measurements over Germany (5 min temporally and 1 km spatially) using synoptic cloud observations, to distinguish convective and stratiform rain events. In a second step, for each precipitation type, the observed data are aggregated over a sequence of time intervals and spatial areas. The resulting matrix allows a detailed investigation of the resolutions at which convective or stratiform events are expected to contribute most to the extremes. We analyze where the statistics of the two types differ and discuss at which resolutions transitions occur between dominance of either of the two precipitation types. We characterize the scales at which the convective or stratiform events will dominate the statistics. For both types, we further develop a mapping between pairs of spatially and temporally aggregated statistics. The resulting curve is relevant when deciding on data resolutions where statistical information in space and time is balanced. Our study may hence also serve as a practical guide for modelers, and for planning the space-time layout of measurement campaigns. We also describe a mapping between different pairs of resolutions, possibly relevant when working with mismatched model and observational resolutions, such as in statistical bias correction.
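
    The space-time aggregation matrix described above can be sketched as follows, with a synthetic field standing in for the radar data: the fine-resolution field is block-averaged over increasing temporal and spatial factors, and a high quantile of each aggregated field serves as a simple stand-in for the extreme statistics.

```python
# Block-average a fine space-time precipitation field over a grid of
# temporal and spatial aggregation factors and report a high quantile.
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.1, scale=2.0, size=(288, 64, 64))   # (time, ny, nx)

def aggregate(field, t_fac, s_fac):
    """Block-average over t_fac time steps and s_fac x s_fac grid cells."""
    nt, ny, nx = field.shape
    f = field[: nt - nt % t_fac, : ny - ny % s_fac, : nx - nx % s_fac]
    f = f.reshape(-1, t_fac, f.shape[1] // s_fac, s_fac,
                  f.shape[2] // s_fac, s_fac)
    return f.mean(axis=(1, 3, 5))

for t_fac in (1, 6, 12):          # e.g. 5 min, 30 min, 1 h
    for s_fac in (1, 4, 16):      # e.g. 1 km, 4 km, 16 km
        q = np.quantile(aggregate(rain, t_fac, s_fac), 0.999)
        print(f"time x {t_fac:2d}, space x {s_fac:2d}: 99.9th pct = {q:.2f}")
```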

  8. Asynchronous schemes for CFD at extreme scales

    Science.gov (United States)

    Konduri, Aditya; Donzis, Diego

    2013-11-01

    Recent advances in computing hardware and software have made simulations an indispensable research tool for understanding fluid flow phenomena in complex conditions in great detail. Due to the nonlinear nature of the governing NS equations, simulations of high-Re turbulent flows are computationally very expensive and demand extreme levels of parallelism. Current large simulations are being done on hundreds of thousands of processing elements (PEs). Benchmarks from these simulations show that communication between PEs takes a substantial amount of time, overwhelming the compute time and resulting in a substantial waste of compute cycles as PEs remain idle. We investigate a novel approach based on widely used finite-difference schemes in which computations are carried out asynchronously, i.e. synchronization of data among PEs is not enforced and computations proceed regardless of the status of messages. This drastically reduces PE idle time and results in much larger computation rates. We show that while these schemes remain stable, their accuracy is significantly affected. We present new schemes that maintain accuracy under asynchronous conditions and provide a viable path towards exascale computing. Performance of these schemes will be shown for simple models like Burgers' equation.
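
    A serial toy emulation of the asynchronous idea, assuming a simple 1D diffusion problem rather than the Navier-Stokes equations: the update uses standard centred differences, but the halo value read from the "neighbour" may be several steps stale, mimicking unsynchronised messages.

```python
# Centred-difference diffusion update where one halo value may be stale.
import numpy as np

nx, nt, nu, dt, dx = 64, 200, 0.05, 1e-3, 1.0 / 64
u = np.sin(2 * np.pi * np.linspace(0, 1, nx, endpoint=False))
left_halo_history = [u[-1]]            # values the "neighbour" has sent so far
rng = np.random.default_rng(2)

for n in range(nt):
    # Asynchrony: read a possibly stale halo value (random delay of 0-3 steps).
    delay = min(rng.integers(0, 4), len(left_halo_history) - 1)
    stale_left = left_halo_history[-1 - delay]
    un = u.copy()
    u[1:-1] = un[1:-1] + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2])
    u[0] = un[0] + nu * dt / dx**2 * (un[1] - 2 * un[0] + stale_left)
    u[-1] = un[-1] + nu * dt / dx**2 * (un[0] - 2 * un[-1] + un[-2])
    left_halo_history.append(u[-1])    # "send" the updated halo value

print("max |u| after asynchronous integration:", np.abs(u).max())
```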

  9. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.

  10. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the Column Quasi-Geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  11. [Upper extremities, neck and back symptoms in office employees working at computer stations].

    Science.gov (United States)

    Zejda, Jan E; Bugajska, Joanna; Kowalska, Małgorzata; Krzych, Lukasz; Mieszkowska, Marzena; Brozek, Grzegorz; Braczkowska, Bogumiła

    2009-01-01

    To obtain current data on the occurrence of work-related symptoms of office computer users in Poland we implemented a questionnaire survey. Its goal was to assess the prevalence and intensity of symptoms of upper extremities, neck and back in office workers who use computers on a regular basis, and to find out if the occurrence of symptoms depends on the duration of computer use and other work-related factors. Office workers in two towns (Warszawa and Katowice), employed in large social services companies, were invited to fill in the Polish version of the Nordic Questionnaire. The questions included work history and history of last-week symptoms of pain of hand/wrist, elbow, arm, neck and upper and lower back (occurrence and intensity measured by visual scale). Altogether 477 men and women returned the completed questionnaires. Between-group symptom differences (chi-square test) were verified by multivariate analysis (GLM). The prevalence of symptoms in individual body parts was as follows: neck, 55.6%; arm, 26.9%; elbow, 13.3%; wrist/hand, 29.9%; upper back, 49.6%; and lower back, 50.1%. Multivariate analysis confirmed the effect of gender, age and years of computer use on the occurrence of symptoms. Among other determinants, forearm support explained pain of wrist/hand, wrist support of elbow pain, and chair adjustment of arm pain. Association was also found between low back pain and chair adjustment and keyboard position. The findings revealed frequent occurrence of symptoms of pain in upper extremities and neck in office workers who use computers on a regular basis. Seating position could also contribute to the frequent occurrence of back pain in the examined population.

  12. Extreme-Scale Alignments Of Quasar Optical Polarizations And Galactic Dust Contamination

    Science.gov (United States)

    Pelgrims, Vincent

    2017-10-01

    Almost twenty years ago the optical polarization vectors from quasars were shown to be aligned over extreme scales. That evidence was later confirmed and enhanced thanks to additional optical data obtained with the ESO instrument FORS2 mounted on the VLT, in Chile. These observations suggest either Galactic foreground contamination of the data or, more interestingly, a cosmological origin. Using 353-GHz polarization data from the Planck satellite, I recently showed that the main features of the extreme-scale alignments of the quasar optical polarization vectors are unaffected by the Galactic thermal dust. This confirms previous studies based on optical starlight polarization and discards the scenario of Galactic contamination. In this talk, I shall briefly review the extreme-scale quasar polarization alignments, discuss the main results submitted in A&A and motivate forthcoming projects at the frontier between Galactic and extragalactic astrophysics.

  13. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection discussed.

  14. Data co-processing for extreme scale analysis level II ASC milestone (4745).

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, David; Moreland, Kenneth D.; Oldfield, Ron A.; Fabian, Nathan D.

    2013-03-01

    Exascale supercomputing will embody many revolutionary changes in the hardware and software of high-performance computing. A particularly pressing issue is gaining insight into the science behind the exascale computations. Power and I/O speed constraints will fundamentally change current visualization and analysis workflows. A traditional post-processing workflow involves storing simulation results to disk and later retrieving them for visualization and data analysis. However, at exascale, scientists and analysts will need a range of options for moving data to persistent storage, as the current offline or post-processing pipelines will not be able to capture the data necessary for data analysis of these extreme scale simulations. This Milestone explores two alternate workflows, characterized as in situ and in transit, and compares them. We find each to have its own merits and faults, and we provide information to help pick the best option for a particular use.

  15. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate for individual sites the exceedance probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  16. Standing Together for Reproducibility in Large-Scale Computing: Report on reproducibility@XSEDE

    OpenAIRE

    James, Doug; Wilkins-Diehr, Nancy; Stodden, Victoria; Colbry, Dirk; Rosales, Carlos; Fahey, Mark; Shi, Justin; Silva, Rafael F.; Lee, Kyo; Roskies, Ralph; Loewe, Laurence; Lindsey, Susan; Kooper, Rob; Barba, Lorena; Bailey, David

    2014-01-01

    This is the final report on reproducibility@xsede, a one-day workshop held in conjunction with XSEDE14, the annual conference of the Extreme Science and Engineering Discovery Environment (XSEDE). The workshop's discussion-oriented agenda focused on reproducibility in large-scale computational research. Two important themes capture the spirit of the workshop submissions and discussions: (1) organizational stakeholders, especially supercomputer centers, are in a unique position to promote, enab...

  17. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009); and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems. One of the techniques pursued by ECMWF is to use Fortran 2008 coarrays to overlap computations and communications and

  18. Investigating the Scaling Properties of Extreme Rainfall Depth ...

    African Journals Online (AJOL)

    Investigating the Scaling Properties of Extreme Rainfall Depth Series in Oromia Regional State, Ethiopia. ... Science, Technology and Arts Research Journal ... for storm duration ranging from 0.5 to 24 hr observed at network of rain gauges sited in Oromia regional state were analyzed using an approach based on moments.

  19. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    Science.gov (United States)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  20. Understanding convective extreme precipitation scaling using observations and an entraining plume model

    NARCIS (Netherlands)

    Loriaux, J.M.; Lenderink, G.; De Roode, S.R.; Siebesma, A.P.

    2013-01-01

    Previously observed twice-Clausius–Clapeyron (2CC) scaling for extreme precipitation at hourly time scales has led to discussions about its origin. The robustness of this scaling is assessed by analyzing a subhourly dataset of 10-min resolution over the Netherlands. The results confirm the validity

  1. Domain Decomposition for Computing Extremely Low Frequency Induced Current in the Human Body

    OpenAIRE

    Perrussel, Ronan; Voyer, Damien; Nicolas, Laurent; Scorretti, Riccardo; Burais, Noël

    2011-01-01

    Computation of electromagnetic fields in high-resolution computational phantoms requires solving large linear systems. We present an application of Schwarz preconditioners with Krylov subspace methods for computing extremely low frequency induced fields in a phantom derived from the Visible Human.

  2. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive number of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive number of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
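
    The worklet concept can be sketched in a few lines of Python (the project itself targets compiled many-core code, so this is only an analogy): a worklet is a stateless per-element operation, and a generic dispatcher invokes it independently on every data element, so parallelism is expressed per item rather than per pipeline copy.

```python
# Worklet-style fine-grained parallelism: a stateless per-element functor
# dispatched over all elements of a field.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def magnitude_worklet(vec):
    """Stateless operation on one point's 3-component vector value."""
    return (vec[0] ** 2 + vec[1] ** 2 + vec[2] ** 2) ** 0.5

def dispatch(worklet, field, workers=4):
    """Invoke the worklet independently on every element of the field."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.fromiter(pool.map(worklet, field), dtype=float)

vectors = np.random.default_rng(3).normal(size=(10_000, 3))
print("first magnitudes:", dispatch(magnitude_worklet, vectors)[:5])
```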

  3. Enabling Structured Exploration of Workflow Performance Variability in Extreme-Scale Environments

    Energy Technology Data Exchange (ETDEWEB)

    Kleese van Dam, Kerstin; Stephan, Eric G.; Raju, Bibi; Altintas, Ilkay; Elsethagen, Todd O.; Krishnamoorthy, Sriram

    2015-11-15

    Workflows are taking an increasingly important role in orchestrating complex scientific processes in extreme scale and highly heterogeneous environments. However, to date we cannot reliably predict, understand, and optimize workflow performance. Sources of performance variability, and in particular the interdependencies of workflow design, execution environment and system architecture, are not well understood. While there is a rich portfolio of tools for performance analysis, modeling and prediction for single applications in homogeneous computing environments, these are not applicable to workflows, due to the number and heterogeneity of the involved workflow and system components and their strong interdependencies. In this paper, we investigate workflow performance goals and identify factors that could have a relevant impact. Based on our analysis, we propose a new workflow performance provenance ontology, the Open Provenance Model-based WorkFlow Performance Provenance, or OPM-WFPP, that will enable the empirical study of workflow performance characteristics and variability including complex source attribution.

  4. Changes and Attribution of Extreme Precipitation in Climate Models: Subdaily and Daily Scales

    Science.gov (United States)

    Zhang, W.; Villarini, G.; Scoccimarro, E.; Vecchi, G. A.

    2017-12-01

    Extreme precipitation events are responsible for numerous hazards, including flooding, soil erosion, and landslides. Because of their significant socio-economic impacts, the attribution and projection of these events are of crucial importance to improve our response, mitigation and adaptation strategies. Here we present results from our ongoing work. In terms of attribution, we use idealized experiments [pre-industrial control experiment (PI) and 1% per year increase (1%CO2) in atmospheric CO2] from ten general circulation models produced under the Coupled Model Intercomparison Project Phase 5 (CMIP5) and the fraction of attributable risk to examine the CO2 effects on extreme precipitation at the sub-daily and daily scales. We find that the increased CO2 concentration substantially increases the odds of the occurrence of sub-daily precipitation extremes compared to the daily scale in most areas of the world, with the exception of some regions in the sub-tropics, likely in relation to the subsidence of the Hadley Cell. These results point to the large role that atmospheric CO2 plays in extreme precipitation under an idealized framework. Furthermore, we investigate the changes in extreme precipitation events with the Community Earth System Model (CESM) climate experiments using the scenarios consistent with the 1.5°C and 2°C temperature targets. We find that the frequency of annual extreme precipitation at a global scale increases in both 1.5°C and 2°C scenarios until around 2070, after which the magnitudes of the trend become much weaker or even negative. Overall, the frequency of global annual extreme precipitation is similar between 1.5°C and 2°C for the period 2006-2035, and the changes in extreme precipitation in individual seasons are consistent with those for the entire year. The frequency of extreme precipitation in the 2°C experiments is higher than for the 1.5°C experiment after the late 2030s, particularly for the period 2071-2100.
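
    The fraction of attributable risk used above has a simple form, FAR = 1 - P0/P1, where P0 and P1 are the probabilities of exceeding the same extreme threshold in the control and perturbed experiments. A sketch with synthetic stand-ins for the PI and 1%CO2 ensembles:

```python
# Fraction of attributable risk from exceedance frequencies in two ensembles.
import numpy as np

rng = np.random.default_rng(4)
pi_precip = rng.gamma(shape=2.0, scale=5.0, size=50_000)    # control climate
co2_precip = rng.gamma(shape=2.0, scale=6.0, size=50_000)   # perturbed climate

threshold = np.quantile(pi_precip, 0.99)    # "extreme" = top 1% of the control
p0 = np.mean(pi_precip > threshold)
p1 = np.mean(co2_precip > threshold)
far = 1.0 - p0 / p1

print(f"P0 = {p0:.4f}, P1 = {p1:.4f}, FAR = {far:.2f}")
```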

  5. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters on the scale of 1000s of processors, used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  6. Visualization and parallel I/O at extreme scale

    International Nuclear Information System (INIS)

    Ross, R B; Peterka, T; Shen, H-W; Hong, Y; Ma, K-L; Yu, H; Moreland, K

    2008-01-01

    In our efforts to solve ever more challenging problems through computational techniques, the scale of our compute systems continues to grow. As we approach petascale, it becomes increasingly important that all the resources in the system be used as efficiently as possible, not just the floating-point units. Because of hardware, software, and usability challenges, storage resources are often one of the most poorly used and performing components of today's compute systems. This situation can be especially true in the case of the analysis phases of scientific workflows. In this paper we discuss the impact of large-scale data on visual analysis operations and examine a collection of approaches to I/O in the visual analysis process. First we examine the performance of volume rendering on a leadership-computing platform and assess the relative cost of I/O, rendering, and compositing operations. Next we analyze the performance implications of eliminating preprocessing from this example workflow. Then we describe a technique that uses data reorganization to improve access times for data-intensive volume rendering

  7. Exascale Co-design for Modeling Materials in Extreme Environments

    Energy Technology Data Exchange (ETDEWEB)

    Germann, Timothy C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-07-08

    Computational materials science has provided great insight into the response of materials under extreme conditions that are difficult to probe experimentally. For example, shock-induced plasticity and phase transformation processes in single-crystal and nanocrystalline metals have been widely studied via large-scale molecular dynamics simulations, and many of these predictions are beginning to be tested at advanced 4th generation light sources such as the Advanced Photon Source (APS) and Linac Coherent Light Source (LCLS). I will describe our simulation predictions and their recent verification at LCLS, outstanding challenges in modeling the response of materials to extreme mechanical and radiation environments, and our efforts to tackle these as part of the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx). ExMatEx has initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. We anticipate that we will be able to exploit hierarchical, heterogeneous architectures to achieve more realistic large-scale simulations with adaptive physics refinement, and are using tractable application scale-bridging proxy application testbeds to assess new approaches and requirements. Such current scale-bridging strategies accumulate (or recompute) a distributed response database from fine-scale calculations, in a top-down rather than bottom-up multiscale approach.

  8. Identification of large-scale meteorological patterns associated with extreme precipitation in the US northeast

    Science.gov (United States)

    Agel, Laurie; Barlow, Mathew; Feldstein, Steven B.; Gutowski, William J.

    2018-03-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. The tropopause height provides a compact representation of the upper-tropospheric potential vorticity, which is closely related to the overall evolution and intensity of weather systems. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied on extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into the overall context of patterns for all days. Six tropopause patterns are identified through KMC for extreme day precipitation: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns for all days are identified. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns show various degrees of anomalously strong moisture transport preceding, and upward motion during, extreme precipitation events.
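
    A schematic of the clustering workflow described above, with random fields standing in for tropopause height and scikit-learn's KMeans in place of the authors' exact implementation: extreme days are the top 1% of wet-day totals, and the large-scale fields on those days are grouped into a small number of patterns.

```python
# Top-1% wet-day threshold followed by k-means clustering of circulation fields.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_days, ny, nx = 3000, 20, 30
precip = rng.gamma(shape=0.3, scale=8.0, size=n_days)       # station-mean rain
fields = rng.normal(size=(n_days, ny, nx))                  # "tropopause height"

wet = precip > 1.0                                          # wet days only
threshold = np.quantile(precip[wet], 0.99)                  # top 1% of wet days
extreme_days = precip >= threshold

X = fields[extreme_days].reshape(int(extreme_days.sum()), -1)   # flatten maps
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print("extreme days per pattern:", np.bincount(labels))
```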

  9. Extreme daily precipitation in Western Europe with climate change at appropriate spatial scales

    NARCIS (Netherlands)

    Booij, Martijn J.

    2002-01-01

    Extreme daily precipitation for the current and changed climate at appropriate spatial scales is assessed. This is done in the context of the impact of climate change on flooding in the river Meuse in Western Europe. The objective is achieved by determining and comparing extreme precipitation from

  10. Scaling of precipitation extremes with temperature in the French Mediterranean region: What explains the hook shape?

    Science.gov (United States)

    Drobinski, P.; Alonzo, B.; Bastin, S.; Silva, N. Da; Muller, C.

    2016-04-01

    Expected changes to future extreme precipitation remain a key uncertainty associated with anthropogenic climate change. Extreme precipitation has been proposed to scale with the precipitable water content in the atmosphere. Assuming constant relative humidity, this implies an increase of precipitation extremes at a rate of about 7% per °C globally, as indicated by the Clausius-Clapeyron relationship. Increases faster and slower than Clausius-Clapeyron have also been reported. In this work, we examine the scaling between precipitation extremes and temperature in the present climate using simulations and measurements from surface weather stations collected in the frame of the HyMeX and MED-CORDEX programs in Southern France. Of particular interest are departures from the Clausius-Clapeyron thermodynamic expectation, their spatial and temporal distribution, and their origin. Looking at the scaling of precipitation extremes with temperature, two regimes emerge which form a hook shape: one at low temperatures (cooler than around 15°C) with rates of increase close to the Clausius-Clapeyron rate, and one at high temperatures (warmer than about 15°C) with sub-Clausius-Clapeyron and most often negative rates. On average, the region of focus does not seem to exhibit super-Clausius-Clapeyron behavior except at some stations, in contrast to earlier studies. Many factors can contribute to departure from Clausius-Clapeyron scaling: time and spatial averaging, choice of scaling temperature (surface versus condensation level), and precipitation efficiency and vertical velocity in updrafts that are not necessarily constant with temperature. But most importantly, the dynamical contribution of orography to precipitation in the fall over this area during the so-called "Cevenoles" events explains the hook shape of the scaling of precipitation extremes.
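
    The scaling diagnostic referred to above can be sketched on synthetic data: wet events are binned by temperature, a high percentile of intensity is computed per bin, and the slope of log(intensity) against temperature gives the scaling rate to compare with the roughly 7% per °C Clausius-Clapeyron value. The synthetic generation rate below is an assumption for illustration only.

```python
# Precipitation-temperature binning and log-linear fit of the scaling rate.
import numpy as np

rng = np.random.default_rng(6)
temp = rng.uniform(0.0, 25.0, size=100_000)
# Synthetic intensities growing ~6.5% per deg C so the recovered rate is sensible.
intensity = rng.gamma(2.0, 1.0, size=temp.size) * np.exp(0.065 * temp)

bins = np.arange(0.0, 26.0, 2.0)
centers, p99 = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (temp >= lo) & (temp < hi)
    if sel.sum() > 200:                      # skip poorly populated bins
        centers.append(0.5 * (lo + hi))
        p99.append(np.percentile(intensity[sel], 99))

slope = np.polyfit(centers, np.log(p99), 1)[0]
print(f"estimated scaling: {100 * (np.exp(slope) - 1):.1f}% per deg C "
      "(Clausius-Clapeyron is about 7%)")
```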

  11. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    Directory of Open Access Journals (Sweden)

    Veiersted Kaj Bo

    2010-04-01

    Background: This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods: A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results: A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions: There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation.

  12. Frameworks for visualization at the extreme scale

    International Nuclear Information System (INIS)

    Joy, Kenneth I; Miller, Mark; Childs, Hank; Bethel, E Wes; Clyne, John; Ostrouchov, George; Ahern, Sean

    2007-01-01

    The challenges of visualization at the extreme scale involve issues of scale, complexity, temporal exploration and uncertainty. The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increased scientific discovery and insight. In this paper, we introduce new uses of visualization frameworks through the introduction of Equivalence Class Functions (ECFs). These functions give a new class of derived quantities designed to greatly expand the ability of the end user to explore and visualize data. ECFs are defined over equivalence classes (i.e., groupings) of elements from an original mesh, and produce summary values for the classes as output. ECFs can be used in the visualization process to directly analyze data, or can be used to synthesize new derived quantities on the original mesh. The design of ECFs enables a parallel implementation that allows the use of these techniques on massive data sets that require parallel processing.
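
    A toy version of an Equivalence Class Function, assuming a simple cell-centred mesh field and a material label as the grouping: a summary value is computed per equivalence class and can be scattered back onto the mesh as a new derived quantity.

```python
# Equivalence-class summary (per-class mean) and scatter back onto the mesh.
import numpy as np

rng = np.random.default_rng(7)
n_cells = 1_000
material = rng.integers(0, 4, size=n_cells)      # class label per mesh cell
temperature = rng.normal(300.0, 25.0, size=n_cells)

def ecf_mean(values, labels):
    """Per-class mean: sums and counts accumulated with bincount."""
    sums = np.bincount(labels, weights=values)
    counts = np.bincount(labels)
    return sums / counts

class_mean = ecf_mean(temperature, material)
derived_field = class_mean[material]             # synthesized field on the mesh
print("per-class means:", np.round(class_mean, 1))
print("cell 0 gets its class mean:", derived_field[0] == class_mean[material[0]])
```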

  13. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998

  14. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    Science.gov (United States)

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitudes. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students' overall performance patterns between the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude, and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large-scale galaxy structure.

  15. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)]

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  16. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    Science.gov (United States)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are

  17. Quantum universe on extremely small space-time scales

    International Nuclear Information System (INIS)

    Kuzmichev, V.E.; Kuzmichev, V.V.

    2010-01-01

    The semiclassical approach to the quantum geometrodynamical model is used for the description of the properties of the Universe on extremely small space-time scales. Under this approach, the matter in the Universe has two components of a quantum nature which behave as antigravitating fluids. The first component does not vanish in the limit ħ → 0 and can be associated with dark energy. The second component is described by an extremely rigid equation of state and goes to zero after the transition to large space-time scales. On small space-time scales, this quantum correction turns out to be significant. It determines the geometry of the Universe near the initial cosmological singularity point. This geometry is conformal to a unit four-sphere embedded in a five-dimensional Euclidean flat space. During the subsequent expansion of the Universe, when reaching the post-Planck era, the geometry of the Universe changes into that conformal to a unit four-hyperboloid in a five-dimensional Lorentz-signatured flat space. This agrees with the hypothesis about the possible change of geometry after the origin of the expanding Universe from the region near the initial singularity point. The origin of the Universe can be interpreted as a quantum transition of the system from a region in the phase space forbidden for the classical motion, but where a trajectory in imaginary time exists, into a region where the equations of motion have the solution which describes the evolution of the Universe in real time. Near the boundary between the two regions, from the side of real time, the Universe undergoes an almost exponential expansion which passes smoothly into the expansion under the action of radiation dominating over matter, which is described by the standard cosmological model.

  18. Computing the distribution of return levels of extreme warm temperatures for future climate projections

    Energy Technology Data Exchange (ETDEWEB)

    Pausader, M.; Parey, S.; Nogaj, M. [EDF/R and D, Chatou Cedex (France); Bernie, D. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-03-15

    In order to take into account uncertainties in the future climate projections there is a growing demand for probabilistic projections of climate change. This paper presents a methodology for producing such a probabilistic analysis of future temperature extremes. The 20- and 100-year return levels are obtained from those of the normalized variable and the changes in mean and standard deviation given by climate models for the desired future periods. Uncertainty in the future change of these extremes is quantified using a multi-model ensemble and a perturbed physics ensemble. The probability density functions of future return levels are computed at a representative location from the joint probability distribution of mean and standard deviation changes given by the two combined ensembles of models. For the studied location, the 100-year return level at the end of the century is below 41°C with 80% confidence. Then, as the number of model simulations is too low to compute a reliable distribution, two techniques proposed in the literature (local pattern scaling and ANOVA) have been used to infer the changes in mean and standard deviation for the combinations of RCM and GCM which have not been run. The ANOVA technique leads to better results for the reconstruction of the mean changes, whereas the two methods fail to correctly infer the changes in standard deviation. As standard deviation change has a major impact on return level change, there is a need to improve the models and the different techniques regarding the variance changes. (orig.)
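
    A sketch of the construction described above, with synthetic annual maxima and placeholder (not model-derived) changes in mean and standard deviation: the return level of the normalized variable is estimated once with a GEV fit, and future return levels follow from the projected mean and standard deviation.

```python
# Return level of the normalized variable, rescaled by projected mean/std changes.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(8)
annual_max = 32.0 + 2.5 * rng.gumbel(size=60)          # present-day maxima (deg C)
mu, sigma = annual_max.mean(), annual_max.std(ddof=1)
z = (annual_max - mu) / sigma                          # normalized variable

shape, loc, scale = genextreme.fit(z)
z100 = genextreme.isf(1.0 / 100.0, shape, loc, scale)  # 100-yr level, normalized

d_mu, d_sigma = 3.0, 0.4                               # assumed future changes
rl_present = mu + sigma * z100
rl_future = (mu + d_mu) + (sigma + d_sigma) * z100
print(f"present 100-yr level: {rl_present:.1f} C, future: {rl_future:.1f} C")
```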

  19. Computer-Administered Interviews and Rating Scales

    Science.gov (United States)

    Garb, Howard N.

    2007-01-01

    To evaluate the value of computer-administered interviews and rating scales, the following topics are reviewed in the present article: (a) strengths and weaknesses of structured and unstructured assessment instruments, (b) advantages and disadvantages of computer administration, and (c) the validity and utility of computer-administered interviews…

  20. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  1. How do the multiple large-scale climate oscillations trigger extreme precipitation?

    Science.gov (United States)

    Shi, Pengfei; Yang, Tao; Xu, Chong-Yu; Yong, Bin; Shao, Quanxi; Li, Zhenya; Wang, Xiaoyan; Zhou, Xudong; Li, Shu

    2017-10-01

    Identifying the links between variations in large-scale climate patterns and precipitation is of tremendous assistance in characterizing surplus or deficit of precipitation, which is especially important for evaluation of local water resources and ecosystems in semi-humid and semi-arid regions. Restricted by current limited knowledge on underlying mechanisms, statistical correlation methods are often used rather than physics-based models to characterize the connections. Nevertheless, available correlation methods are generally unable to reveal the interactions among a wide range of climate oscillations and associated effects on precipitation, especially on extreme precipitation. In this work, a probabilistic analysis approach by means of a state-of-the-art copula-based joint probability distribution is developed to characterize the aggregated behaviors of large-scale climate patterns and their connections to precipitation. This method is employed to identify the complex connections between climate patterns (Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO)) and seasonal precipitation over a typical semi-humid and semi-arid region, the Haihe River Basin in China. Results show that the interactions among multiple climate oscillations are non-uniform in most seasons and phases. Certain joint extreme phases can significantly trigger extreme precipitation (flood and drought) owing to the amplification effect among climate oscillations.
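
    A hedged sketch of a copula-style joint-probability analysis in the spirit of the abstract; the paper's exact copula family, fitting procedure, and data are not reproduced. The sketch rank-transforms AMO/ENSO/PDO-like indices and a precipitation series to uniform margins, fits a Gaussian copula through the normal-scores correlation, and estimates the probability of a dry extreme given a joint extreme phase of the indices. All inputs are synthetic placeholders.

```python
# Illustrative Gaussian-copula sketch on synthetic climate indices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 600                                     # e.g. months of data
amo  = rng.normal(size=n)
enso = rng.normal(size=n)
pdo  = 0.5 * enso + rng.normal(scale=0.9, size=n)
precip = 0.4 * enso - 0.3 * pdo + rng.normal(scale=0.8, size=n)

def to_normal_scores(x):
    """Empirical CDF -> standard normal scores (Gaussian copula margins)."""
    ranks = stats.rankdata(x) / (len(x) + 1.0)
    return stats.norm.ppf(ranks)

z = np.column_stack([to_normal_scores(v) for v in (amo, enso, pdo, precip)])
corr = np.corrcoef(z, rowvar=False)          # copula correlation matrix

# Monte Carlo from the fitted copula: P(precipitation in its lower 10% tail
# given ENSO in its lower 20% and PDO in its upper 20% phase).
sim = rng.multivariate_normal(np.zeros(4), corr, size=200_000)
u = stats.norm.cdf(sim)                      # back to uniform margins
joint_phase = (u[:, 1] < 0.2) & (u[:, 2] > 0.8)
p_dry_given_phase = np.mean(u[joint_phase, 3] < 0.1)
print(f"P(dry extreme | joint extreme phase) = {p_dry_given_phase:.2f}")
```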

  2. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    Science.gov (United States)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  3. Scale orientated analysis of river width changes due to extreme flood hazards

    Directory of Open Access Journals (Sweden)

    G. Krapesch

    2011-08-01

    Full Text Available This paper analyses the morphological effects of extreme floods (recurrence interval >100 years) and examines which parameters best describe the width changes due to erosion, based on 5 affected alpine gravel-bed rivers in Austria. The research was based on vertical aerial photos of the rivers before and after extreme floods, hydrodynamic numerical models and cross-sectional measurements supported by LiDAR data of the rivers. Average width ratios (width after/before the flood) were calculated and correlated with different hydraulic parameters (specific stream power, shear stress, flow area, specific discharge). Depending on the geomorphological boundary conditions of the different rivers, a mean width ratio between 1.12 (Lech River) and 3.45 (Trisanna River) was determined on the reach scale. The specific stream power (SSP) best predicted the mean width ratios of the rivers, especially on the reach and sub-reach scales. On the local scale, more parameters have to be considered to define the "minimum morphological spatial demand of rivers", which is a crucial parameter for addressing and managing flood hazards and should be used in hazard zone plans and spatial planning.
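
    A minimal sketch of the reach-scale correlation analysis described above: compute the width ratio (post-/pre-flood width) for each reach and check which hydraulic parameter correlates best with it. The numbers are placeholders, not the Austrian survey data.

```python
# Illustrative width-ratio correlation check on placeholder reach data.
import numpy as np
from scipy import stats

width_before = np.array([22.0, 18.5, 30.0, 12.0, 25.0])   # m
width_after  = np.array([27.0, 35.0, 41.0, 38.0, 33.0])   # m
width_ratio = width_after / width_before

predictors = {
    "specific_stream_power": np.array([180.0, 420.0, 260.0, 610.0, 230.0]),  # W/m^2
    "shear_stress":          np.array([95.0, 160.0, 120.0, 210.0, 110.0]),   # N/m^2
    "specific_discharge":    np.array([3.1, 5.4, 4.0, 6.8, 3.6]),            # m^2/s
}

for name, values in predictors.items():
    rho, p = stats.spearmanr(values, width_ratio)
    print(f"{name:>22s}: Spearman rho = {rho:+.2f} (p = {p:.2f})")
```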

  4. Regional-Scale High-Latitude Extreme Geoelectric Fields Pertaining to Geomagnetically Induced Currents

    Science.gov (United States)

    Pulkkinen, Antti; Bernabeu, Emanuel; Eichner, Jan; Viljanen, Ari; Ngwira, Chigomezyo

    2015-01-01

    Motivated by the needs of the high-voltage power transmission industry, we use data from the high-latitude IMAGE magnetometer array to study characteristics of extreme geoelectric fields at regional scales. We use 10-s resolution data for years 1993-2013, and the fields are characterized using average horizontal geoelectric field amplitudes taken over station groups that span about 500-km distance. We show that geoelectric field structures associated with localized extremes at single stations can be greatly different from structures associated with regionally uniform geoelectric fields, which are well represented by spatial averages over single stations. Visual extrapolation and rigorous extreme value analysis of spatially averaged fields indicate that the expected ranges for 1-in-100-year extreme events are 3-8 V/km and 3.4-7.1 V/km, respectively. The Quebec reference ground model is used in the calculations.
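
    A hedged sketch of the extreme value step described above: fit a GEV to annual maxima of the spatially averaged horizontal geoelectric field and read off the 1-in-100-year level. The field values are synthetic placeholders, not IMAGE magnetometer data, and the fitting choices are illustrative rather than those of the paper.

```python
# Illustrative GEV return-level estimate on synthetic annual maxima.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic annual maxima of the spatially averaged field, V/km (21 years)
annual_max = rng.gumbel(loc=1.2, scale=0.6, size=21)

shape, loc, scale = stats.genextreme.fit(annual_max)
rl_100 = stats.genextreme.ppf(1 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"Estimated 1-in-100-year spatially averaged field: {rl_100:.1f} V/km")
```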

  5. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality impedes the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM for short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM.
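
    A compact, hedged sketch of the idea behind FSVD-H-ELM rather than the authors' implementation: hidden-node weights are taken from the right singular vectors of SVDs computed on several random subsets of the data, and the output weights are then solved in closed form as in a standard ELM. The dataset, sizes, and tanh activation are illustrative assumptions.

```python
# Illustrative ELM with SVD-derived hidden nodes from random data subsets.
import numpy as np

rng = np.random.default_rng(3)

def svd_hidden_weights(X, n_hidden, n_subsets=4, subset_size=200):
    """Collect right singular vectors from random data subsets as hidden weights."""
    weights = []
    per_subset = int(np.ceil(n_hidden / n_subsets))
    for _ in range(n_subsets):
        idx = rng.choice(len(X), size=min(subset_size, len(X)), replace=False)
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
        weights.append(Vt[:per_subset])
    return np.vstack(weights)[:n_hidden].T          # shape (n_features, n_hidden)

def elm_fit(X, y, n_hidden=64):
    W = svd_hidden_weights(X, n_hidden)
    b = rng.normal(scale=0.1, size=n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem
X = rng.normal(size=(1000, 20))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=1000)
W, b, beta = elm_fit(X[:800], y[:800])
pred = elm_predict(X[800:], W, b, beta)
print("test RMSE:", np.sqrt(np.mean((pred - y[800:]) ** 2)).round(3))
```

    Computing the SVD only on small random subsets is what keeps the preprocessing tractable on large data; the closed-form output solve is the standard ELM step.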

  6. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  7. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages

  8. Application of the extreme value theory to beam loss estimates in the SPIRAL2 linac based on large scale Monte Carlo computations

    Directory of Open Access Journals (Sweden)

    R. Duperrier

    2006-04-01

    Full Text Available The influence of random perturbations of high-intensity accelerator elements on the beam losses is considered. This paper presents the error sensitivity study which has been performed for the SPIRAL2 linac in order to define the tolerances for the construction. The proposed driver aims to accelerate a 5 mA deuteron beam up to 20 A MeV and a 1 mA ion beam for q/A=1/3 up to 14.5 A MeV. It is a continuous-wave linac, designed for maximum efficiency in the transmission of intense beams and a tunable energy. It consists of an injector (two ECR sources + LEBTs, with the possibility to inject from several sources, + a radio-frequency quadrupole) followed by a superconducting section based on an array of independently phased cavities where the transverse focusing is performed with warm quadrupoles. The correction scheme and the expected losses are described. The extreme value theory is used to estimate the expected beam losses. The described method couples large-scale computations to obtain probability distribution functions. The bootstrap technique is used to provide confidence intervals associated with the beam loss predictions. With such a method, it is possible to quantify the risk of losing a few watts in this high-power linac (up to 200 kW).
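
    A hedged sketch of the statistical post-processing described above (not the SPIRAL2 tracking code itself): take the maximum beam loss from each Monte Carlo error-seeded run, fit a generalized extreme value distribution, and bootstrap the fit to attach a confidence interval to the probability of losing more than a given power. The loss values and threshold are synthetic placeholders.

```python
# Illustrative extreme-value fit with bootstrap confidence intervals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
max_loss_per_run = rng.gumbel(loc=0.5, scale=0.4, size=1000)   # W, synthetic

threshold_watts = 2.0
n_boot = 200
p_exceed = np.empty(n_boot)
for i in range(n_boot):
    sample = rng.choice(max_loss_per_run, size=len(max_loss_per_run), replace=True)
    shape, loc, scale = stats.genextreme.fit(sample)
    p_exceed[i] = stats.genextreme.sf(threshold_watts, shape, loc=loc, scale=scale)

lo, hi = np.percentile(p_exceed, [2.5, 97.5])
print(f"P(max loss > {threshold_watts} W) in [{lo:.2e}, {hi:.2e}] (95% bootstrap CI)")
```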

  9. Analysis of the Extremely Low Frequency Magnetic Field Emission from Laptop Computers

    Directory of Open Access Journals (Sweden)

    Brodić Darko

    2016-03-01

    Full Text Available This study addresses the problem of magnetic field emission produced by laptop computers. Although the magnetic field is spread over the entire frequency spectrum, the most dangerous part for laptop users is the frequency range from 50 to 500 Hz, commonly called the extremely low frequency magnetic field. In this frequency region the magnetic field is characterized by high peak values. To examine the influence of the laptop's magnetic field emission in the office, a specific experiment is proposed. It includes the measurement of the magnetic field at six positions on the laptop which are in close contact with its user. The results obtained from ten different laptop computers show extremely high emission at some positions, depending on power dissipation or poor ergonomics. Finally, the experiment identifies these dangerous positions of magnetic field emission and suggests possible solutions.

  10. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  11. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexities. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  12. Moths produce extremely quiet ultrasonic courtship songs by rubbing specialized scales

    DEFF Research Database (Denmark)

    Nakano, Ryo; Skals, Niels; Takanashi, Takuma

    2008-01-01

    level at 1 cm) adapted for private sexual communication in the Asian corn borer moth, Ostrinia furnacalis. During courtship, the male rubs specialized scales on the wing against those on the thorax to produce the songs, with the wing membrane underlying the scales possibly acting as a sound resonator....... The male's song suppresses the escape behavior of the female, thereby increasing his mating success. Our discovery of extremely low-intensity ultrasonic communication may point to a whole undiscovered world of private communication, using "quiet" ultrasound....

  13. Establishing the Turkish version of the SIGAM mobility scale, and determining its validity and reliability in lower extremity amputees.

    Science.gov (United States)

    Yilmaz, Hülya; Gafuroğlu, Ümit; Ryall, Nicola; Yüksel, Selcen

    2018-02-01

    The aim of this study is to adapt the Special Interest Group in Amputee Medicine (SIGAM) mobility scale to Turkish, and to test its validity and reliability in lower extremity amputees. Adaptation of the scale into Turkish was performed by following the steps in American Association of Orthopedic Surgeons (AAOS) guideline. Turkish version of the scale was tested twice on 109 patients who had lower extremity amputations, at hours 0 and 72. The reliability of the Turkish version was tested for internal consistency and test-retest reliability. Structural validity was tested using the "scale validity" method. For this purpose, the scores of the Short Form-36 (SF-36), Functional Ambulation Scale (FAS), Get Up and Go Test, and Satisfaction with the Prosthesis Questionnaire (SATPRO) were calculated, and analyzed using Spearman's correlation test. Cronbach's alpha coefficient was 0.67 for the Turkish version of the SIGAM mobility scale. Cohen's kappa coefficients were between 0.224 and 0.999. Repeatability according to the results of the SIGAM mobility scale (grades A-F) was 0.822. We found significant and strong positive correlations of the SIGAM mobility scale results with the FAS, Get Up and Go Test, SATPRO, and all of the SF-36 subscales. In our study, the Turkish version of the SIGAM mobility scale was found as a reliable, valid, and easy to use scale in everyday practice for measuring mobility in lower extremity amputees. Implications for Rehabilitation Amputation is the surgical removal of a severely injured and nonfunctional extremity, at a level of one or more bones proximal to the body. Loss of a lower extremity is one of the most important conditions that cause functional disability. The Special Interest Group in Amputee Medicine (SIGAM) mobility scale contains 21 questions that evaluate the mobility of lower extremity amputees. Lack of a specific Turkish scale that evaluates rehabilitation results and mobility of lower extremity amputees, and determines their

  14. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models.

  15. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models.

  16. Using GRACE Satellite Gravimetry for Assessing Large-Scale Hydrologic Extremes

    Directory of Open Access Journals (Sweden)

    Alexander Y. Sun

    2017-12-01

    Full Text Available Global assessment of the spatiotemporal variability in terrestrial total water storage anomalies (TWSA in response to hydrologic extremes is critical for water resources management. Using TWSA derived from the gravity recovery and climate experiment (GRACE satellites, this study systematically assessed the skill of the TWSA-climatology (TC approach and breakpoint (BP detection method for identifying large-scale hydrologic extremes. The TC approach calculates standardized anomalies by using the mean and standard deviation of the GRACE TWSA corresponding to each month. In the BP detection method, the empirical mode decomposition (EMD is first applied to identify the mean return period of TWSA extremes, and then a statistical procedure is used to identify the actual occurrence times of abrupt changes (i.e., BPs in TWSA. Both detection methods were demonstrated on basin-averaged TWSA time series for the world’s 35 largest river basins. A nonlinear event coincidence analysis measure was applied to cross-examine abrupt changes detected by these methods with those detected by the Standardized Precipitation Index (SPI. Results show that our EMD-assisted BP procedure is a promising tool for identifying hydrologic extremes using GRACE TWSA data. Abrupt changes detected by the BP method coincide well with those of the SPI anomalies and with documented hydrologic extreme events. Event timings obtained by the TC method were ambiguous for a number of river basins studied, probably because the GRACE data length is too short to derive long-term climatology at this time. The BP approach demonstrates a robust wet-dry anomaly detection capability, which will be important for applications with the upcoming GRACE Follow-On mission.
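
    A minimal sketch of the TWSA-climatology (TC) step described above, using a synthetic series rather than real GRACE data: for each calendar month, standardize the basin-averaged TWSA by that month's long-term mean and standard deviation, then flag months whose anomaly exceeds a chosen threshold as candidate wet or dry extremes. The EMD-assisted breakpoint detection is not reproduced here.

```python
# Illustrative monthly-climatology standardization of a synthetic TWSA series.
import numpy as np

rng = np.random.default_rng(5)
n_months = 15 * 12
months = np.arange(n_months) % 12
twsa = 8.0 * np.sin(2 * np.pi * months / 12) + rng.normal(scale=4.0, size=n_months)
twsa[100:104] -= 20.0                         # inject a drought-like dip

std_anom = np.empty(n_months)
for m in range(12):
    sel = months == m
    std_anom[sel] = (twsa[sel] - twsa[sel].mean()) / twsa[sel].std(ddof=1)

extreme_idx = np.where(np.abs(std_anom) > 2.0)[0]
print("candidate extreme months:", extreme_idx)
```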

  17. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.

  18. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)

    2017-02-01

    efficient computation on an exascale computer. This project concludes with a functional prototype containing pervasively parallel algorithms that perform demonstratively well on many-core processors. These algorithms are fundamental for performing data analysis and visualization at extreme scale.

  19. Analyzing extreme sea levels for broad-scale impact and adaptation studies

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.

    2017-12-01

    Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels (ESL), because increasing damage due to extreme events is one of the major consequences of sea-level rise (SLR) and climate change. Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future SLR; different scenarios were developed with process-based or semi-empirical models and used for coastal impact studies at various temporal and spatial scales to guide coastal management and adaptation efforts. Uncertainties in future SLR are typically accounted for by analyzing the impacts associated with a range of scenarios and model ensembles. ESL distributions are then displaced vertically according to the SLR scenarios under the inherent assumption that we have perfect knowledge on the statistics of extremes. However, there is still a limited understanding of present-day ESL which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of storm surge water levels, and (2) statistical models used for determining present-day ESL exceedance probabilities. There is no universally accepted approach to obtain such values for broad-scale flood risk assessments and while substantial research has explored SLR uncertainties, we quantify, for the first time globally, key uncertainties in ESL estimates. We find that contemporary ESL uncertainties exceed those from SLR projections and, assuming that we meet the Paris agreement, the projected SLR itself by the end of the century. Our results highlight the necessity to further improve our understanding of uncertainties in ESL estimates through (1) continued improvement of numerical and statistical models to simulate and analyze coastal water levels and (2) exploit the rich observational database and continue data archeology to obtain longer time series and remove model bias

  20. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across

  1. Computational approach on PEB process in EUV resist: multi-scale simulation

    Science.gov (United States)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for high performance and low cost of semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve this goal. As a predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, the post-exposure bake has mainly been studied by experimental groups, but the development of its photoresist is at a critical point because the underlying mechanisms of the process have not been unveiled. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post-exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculations (quantum mechanical simulation) were conducted to quantitatively predict the activation energy and probability of the chemical reactions, and these were applied to molecular dynamics simulation to construct a reliable computational model. Then, the overall chemical reactions were simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies the phenomena of both quantum and atomic scales during the post-exposure bake process, and it will be helpful for understanding critical factors affecting the performance of the resulting photoresist and for designing the next-generation material.
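
    A hedged sketch of how a DFT-derived activation energy can feed a bake-step reaction model, in the spirit of the multiscale workflow described above but not the authors' code: a first-order Arrhenius rate built from the activation energy gives the fraction of protected groups that deprotect during a bake of a given duration. The activation energy, attempt frequency, and bake conditions below are illustrative assumptions.

```python
# Illustrative Arrhenius estimate of deprotection during a post-exposure bake.
import numpy as np

KB_EV = 8.617333262e-5                      # Boltzmann constant, eV/K

def deprotection_fraction(e_act_ev, temp_k, bake_time_s, attempt_freq_hz=1e13):
    """First-order Arrhenius estimate of the fraction of protected groups
    that react during a post-exposure bake of the given duration."""
    rate = attempt_freq_hz * np.exp(-e_act_ev / (KB_EV * temp_k))
    return 1.0 - np.exp(-rate * bake_time_s)

e_act = 1.1                                  # eV, placeholder for a DFT value
for temp_c in (90.0, 110.0, 130.0):
    frac = deprotection_fraction(e_act, temp_c + 273.15, bake_time_s=60.0)
    print(f"bake at {temp_c:5.1f} degC for 60 s -> reacted fraction {frac:.2f}")
```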

  2. Synchronization and Causality Across Time-scales: Complex Dynamics and Extremes in El Niño/Southern Oscillation

    Science.gov (United States)

    Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.

    2017-12-01

    A better understanding of dynamics in complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. The large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for a better understanding of underlying mechanisms and a key for modeling and prediction of such systems. This study introduces and applies information theory diagnostics to phase and amplitude time series of different wavelet components of observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical, climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to improved prediction. Moreover, the statistical framework applied here is suitable for directly inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.
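
    A hedged sketch of the cross-scale diagnostic described above, with two simplifications that are mine rather than the study's: the oscillatory components are extracted with a Butterworth band-pass plus Hilbert transform (as a stand-in for the wavelet decomposition), and the coupling is measured with a plain histogram mutual information between phases instead of conditional mutual information with surrogate testing. The time series is synthetic.

```python
# Illustrative phase extraction and mutual information between time scales.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(7)
fs = 12.0                                     # samples per year (monthly data)
t = np.arange(0, 120, 1.0 / fs)               # 120 "years"

slow = np.sin(2 * np.pi * t / 5.0)            # ~5-year component
fast = np.sin(2 * np.pi * t / 1.0 + 0.8 * slow)  # annual cycle modulated by it
x = slow + fast + 0.3 * rng.normal(size=t.size)

def band_phase(signal, low_yr, high_yr):
    """Phase of the band-passed component with periods in [low_yr, high_yr] years."""
    b, a = butter(3, [1.0 / high_yr / (fs / 2), 1.0 / low_yr / (fs / 2)], "bandpass")
    return np.angle(hilbert(filtfilt(b, a, signal)))

def mutual_information(a, b, bins=16):
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = p_ab / p_ab.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    nz = p_ab > 0
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz]))

phase_slow = band_phase(x, 4.0, 6.0)
phase_fast = band_phase(x, 0.8, 1.2)
print(f"MI between 5-yr and annual phases: "
      f"{mutual_information(phase_slow, phase_fast):.3f} nats")
```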

  3. Spatial Scaling of Global Rainfall and Flood Extremes

    Science.gov (United States)

    Devineni, Naresh; Lall, Upmanu; Xi, Chen; Ward, Philip

    2014-05-01

    Floods associated with severe storms are a significant source of risk for property, life and supply chains. These property losses tend to be determined as much by the duration and spatial extent of flooding as by the depth and velocity of inundation. Long-duration floods are typically induced by persistent rainfall (of up to 30 days' duration), as seen recently in Thailand, Pakistan, the Ohio and the Mississippi Rivers, France, and Germany. Events related to persistent and recurrent rainfall appear to correspond to the persistence of specific global climate patterns that may be identifiable from global, historical data fields, and also from climate models that project future conditions. In this paper, we investigate the statistical properties of the spatial manifestation of rainfall exceedances and floods. We present the first-ever results of a global analysis of the scaling characteristics of extreme rainfall and flood event duration, volumes and contiguous flooded areas as a result of large-scale organization of long-duration rainfall events. Results are organized by latitude and with reference to the phases of ENSO, and reveal surprising invariance across latitude. Speculation as to the potential relation to dynamical factors is presented.

  4. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  5. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  6. More scalability, less pain: A simple programming model and its implementation for extreme computing

    International Nuclear Information System (INIS)

    Lusk, E.L.; Pieper, S.C.; Butler, R.M.

    2010-01-01

    This is the story of a simple programming model, its implementation for extreme computing, and a breakthrough in nuclear physics. A critical issue for the future of high-performance computing is the programming model to use on next-generation architectures. Described here is a promising approach: program very large machines by combining a simplified programming model with a scalable library implementation. The presentation takes the form of a case study in nuclear physics. The chosen application addresses fundamental issues in the origins of our Universe, while the library developed to enable this application on the largest computers may have applications beyond this one.

  7. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  8. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Douglas Thain is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  9. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that

  10. Extreme-Scale Stochastic Particle Tracing for Uncertain Unsteady Flow Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Hanqi; He, Wenbin; Seo, Sangmin; Shen, Han-Wei; Peterka, Tom

    2016-11-13

    We present an efficient and scalable solution to estimate uncertain transport behaviors using stochastic flow maps (SFMs) for visualizing and analyzing uncertain unsteady flows. SFM computation is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We alleviate the computational cost by decoupling the time dependencies in SFMs so that we can process adjacent time steps independently and then compose them together for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We then parallelize over tasks—packets of particles in our design—to achieve high efficiency in MPI/thread hybrid programming. Such a task model also enables CPU/GPU coprocessing. We show the scalability on two supercomputers, Mira (up to 1M Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), that can trace billions of particles in seconds.

  11. Improving plot- and regional-scale crop models for simulating impacts of climate variability and extremes

    Science.gov (United States)

    Tao, F.; Rötter, R.

    2013-12-01

    Many studies on global climate report that climate variability is increasing, with more frequent and intense extreme events [1]. There are quite large uncertainties from both the plot- and regional-scale models in simulating impacts of climate variability and extremes on crop development, growth and productivity [2,3]. One key to reducing the uncertainties is better exploitation of experimental data to eliminate crop model deficiencies and develop better algorithms that more adequately capture the impacts of extreme events, such as high temperature and drought, on crop performance [4,5]. In the present study, in a first step, the inter-annual variability in wheat yield and climate from 1971 to 2012 in Finland was investigated. Using statistical approaches, the impacts of climate variability and extremes on wheat growth and productivity were quantified. In a second step, a plot-scale model, WOFOST [6], and a regional-scale crop model, MCWLA [7], were calibrated and validated, and applied to simulate wheat growth and yield variability from 1971 to 2012. Next, the estimated impacts of high temperature stress, cold damage, and drought stress on crop growth and productivity based on the statistical approaches and on the crop simulation models WOFOST and MCWLA were compared. Then, the impact mechanisms of climate extremes on crop growth and productivity in the WOFOST and MCWLA models were identified, and subsequently, the various algorithms and impact functions were fitted against the long-term crop trial data. Finally, the impact mechanisms, algorithms and functions in the WOFOST and MCWLA models were improved to better simulate the impacts of climate variability and extremes, particularly high temperature stress, cold damage and drought stress, for location-specific and large-area climate impact assessments. Our studies provide a good example of how to improve, in parallel, the plot- and regional-scale models for simulating impacts of climate variability and extremes, as needed for

  12. Assessing Regional Scale Variability in Extreme Value Statistics Under Altered Climate Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Brunsell, Nathaniel [Univ. of Kansas, Lawrence, KS (United States); Mechem, David [Univ. of Kansas, Lawrence, KS (United States); Ma, Chunsheng [Wichita State Univ., KS (United States)

    2015-02-20

    Recent studies have suggested that low-frequency modes of climate variability can significantly influence regional climate. The climatology associated with extreme events has been shown to be particularly sensitive. This has profound implications for droughts, heat waves, and food production. We propose to examine regional climate simulations conducted over the continental United States by applying a recently developed technique which combines wavelet multi-resolution analysis with information theory metrics. This research is motivated by two fundamental questions concerning the spatial and temporal structure of extreme events. These questions are 1) what temporal scales of the extreme value distributions are most sensitive to alteration by low-frequency climate forcings and 2) what is the nature of the spatial structure of variation in these timescales? The primary objective is to assess to what extent information theory metrics can be useful in characterizing the nature of extreme weather phenomena. Specifically, we hypothesize that (1) changes in the nature of extreme events will impact the temporal probability density functions and that information theory metrics will be sensitive to these changes and (2) via a wavelet multi-resolution analysis, we will be able to characterize the relative contribution of different timescales to the stochastic nature of extreme events. In order to address these hypotheses, we propose a unique combination of an established regional climate modeling approach and advanced statistical techniques to assess the effects of low-frequency modes on climate extremes over North America. The behavior of climate extremes in RCM simulations for the 20th century will be compared with statistics calculated from the United States Historical Climatology Network (USHCN) and simulations from the North American Regional Climate Change Assessment Program (NARCCAP). This effort will serve to establish the baseline behavior of climate extremes, the

  13. Extreme-scale alignments of quasar optical polarizations and Galactic dust contamination

    OpenAIRE

    Pelgrims, Vincent

    2017-01-01

    Almost twenty years ago the optical polarization vectors from quasars were shown to be aligned over extreme scales. That evidence was later confirmed and enhanced thanks to additional optical data obtained with the ESO instrument FORS2 mounted on the VLT in Chile. These observations suggest either Galactic foreground contamination of the data or, more interestingly, a cosmological origin. Using 353-GHz polarization data from the Planck satellite, I recently showed that the main features of t...

  14. Brief Assessment of Motor Function: Content Validity and Reliability of the Upper Extremity Gross Motor Scale

    Science.gov (United States)

    Cintas, Holly Lea; Parks, Rebecca; Don, Sarah; Gerber, Lynn

    2011-01-01

    Content validity and reliability of the Brief Assessment of Motor Function (BAMF) Upper Extremity Gross Motor Scale (UEGMS) were evaluated in this prospective, descriptive study. The UEGMS is one of five BAMF ordinal scales designed for quick documentation of gross, fine, and oral motor skill levels. Designed to be independent of age and…

  15. A Computer-Based Visual Analog Scale,

    Science.gov (United States)

    1992-06-01

    34 keys on the computer keyboard or other input device. The initial position of the arrow is always in the center of the scale to prevent biasing the...

  16. Sensitivity of extreme precipitation to temperature: the variability of scaling factors from a regional to local perspective

    Science.gov (United States)

    Schroeer, K.; Kirchengast, G.

    2018-06-01

    Potential increases in extreme rainfall-induced hazards in a warming climate have motivated studies to link precipitation intensities to temperature. Increases exceeding the Clausius-Clapeyron (CC) rate of 6-7% per °C are seen in short-duration, convective, high-percentile rainfall at mid-latitudes, but the rates of change cease or revert at regionally variable threshold temperatures due to moisture limitations. It is unclear, however, what these findings mean in terms of the actual risk of extreme precipitation on a regional to local scale. When conditioning precipitation intensities on local temperatures, key influences on the scaling relationship, such as the annual cycle and regional weather patterns, need better understanding. Here we analyze these influences using sub-hourly to daily precipitation data from a dense network of 189 stations in south-eastern Austria. We find that the temperature sensitivities in the mountainous western region are lower than in the eastern lowlands. This is due to the different weather patterns that cause extreme precipitation in these regions. Sub-hourly and hourly intensities intensify at super-CC and CC rates, respectively, up to temperatures of about 17 °C. However, we also find that, because of the regional and seasonal variability of the precipitation intensities, a smaller scaling factor can imply a larger absolute change in intensity. Our insights underline that temperature-precipitation scaling requires careful interpretation of the intent and setting of the study. When this is considered, conditional scaling factors can help to better understand which influences control the intensification of rainfall with temperature on a regional scale.
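
    A minimal sketch of the binning-and-scaling estimate implied above (the study's exact regression choices are not reproduced): pair each rainfall intensity with the local temperature, compute a high percentile per temperature bin, and fit an exponential rate, reported in percent per degree Celsius, for comparison with the 6-7% Clausius-Clapeyron rate. The data are synthetic and tuned to flatten above roughly 17 °C, mimicking the moisture-limitation break mentioned in the abstract.

```python
# Illustrative temperature-binned precipitation scaling estimate.
import numpy as np

rng = np.random.default_rng(8)
n = 50_000
temp_c = rng.uniform(0.0, 25.0, size=n)
# Synthetic intensities whose upper tail grows ~7%/degC, flattening above ~17 degC
t_eff = np.minimum(temp_c, 17.0)
intensity = rng.gamma(shape=0.8, scale=2.0 * 1.07 ** t_eff, size=n)   # mm/h

bins = np.arange(0.0, 26.0, 2.0)
centers, p99 = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (temp_c >= lo) & (temp_c < hi)
    if sel.sum() > 200:
        centers.append(0.5 * (lo + hi))
        p99.append(np.percentile(intensity[sel], 99))

# Log-linear fit below the ~17 degC break gives the scaling factor in %/degC
centers, p99 = np.array(centers), np.array(p99)
mask = centers <= 17.0
slope, _ = np.polyfit(centers[mask], np.log(p99[mask]), 1)
print(f"estimated scaling below 17 degC: {100 * (np.exp(slope) - 1):.1f} %/degC")
```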

  17. dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science

    Energy Technology Data Exchange (ETDEWEB)

    Livny, Miron [Univ. of Wisconsin, Madison, WI (United States)

    2018-01-22

    This report introduces publications presenting the results of a project that aimed to design a computational framework enabling computational experimentation at scale while supporting the model of "submit locally, compute globally". The project focuses on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during the run.

  18. Contribution of large-scale midlatitude disturbances to hourly precipitation extremes in the United States

    Science.gov (United States)

    Barbero, Renaud; Abatzoglou, John T.; Fowler, Hayley J.

    2018-02-01

    Midlatitude synoptic weather regimes account for a substantial portion of annual precipitation accumulation as well as multi-day precipitation extremes across parts of the United States (US). However, little attention has been devoted to understanding how synoptic-scale patterns contribute to hourly precipitation extremes. A majority of 1-h annual maximum precipitation (AMP) events across the western US were found to be linked to two coherent midlatitude synoptic patterns: disturbances propagating along the jet stream, and cutoff upper-level lows. The influence of these two patterns on 1-h AMP varies geographically. Over 95% of 1-h AMP events along the western coastal US were coincident with progressive midlatitude waves embedded within the jet stream, while over 30% of 1-h AMP events across the interior western US were coincident with cutoff lows. Between 30% and 60% of 1-h AMP events were coincident with the jet stream across the Ohio River Valley and southeastern US, whereas a majority of 1-h AMP events over the rest of the central and eastern US were not found to be associated with either midlatitude synoptic feature. Composite analyses for 1-h AMP days coincident with cutoff lows and the jet stream show that anomalous moisture flux and upper-level dynamics are responsible for initiating instability and setting up an environment conducive to 1-h AMP events. While hourly precipitation extremes are generally thought to be purely convective in nature, this study shows that large-scale dynamics and baroclinic disturbances may also contribute to precipitation extremes on sub-daily timescales.

  19. Computer simulations for the nano-scale

    International Nuclear Information System (INIS)

    Stich, I.

    2007-01-01

    A review of methods for computations at the nano-scale is presented. The paper should provide a convenient starting point into computations for the nano-scale as well as a more in-depth presentation for those already working in the field of atomic/molecular-scale modeling. The argument is divided into chapters covering the methods for the description of (i) electrons, (ii) ions, and (iii) techniques for efficiently solving the underlying equations. A fairly broad view is taken, covering the Hartree-Fock approximation, density functional techniques and quantum Monte Carlo techniques for electrons. The customary quantum chemistry methods, such as post-Hartree-Fock techniques, are only briefly mentioned. Descriptions of both classical and quantum ions are presented. The techniques cover Ehrenfest, Born-Oppenheimer, and Car-Parrinello dynamics. Strong and weak points of both a principal and a technical nature are analyzed. In the second part we introduce a number of applications to demonstrate the different approximations and techniques introduced in the first part. They cover a wide range of applications such as non-simple liquids, surfaces, molecule-surface interactions, applications in nanotechnology, etc. These more in-depth presentations, while certainly not exhaustive, should provide information on the technical aspects of the simulations, typical parameters used, and ways of analyzing the huge amounts of data generated in these large-scale supercomputer simulations. (author)

  20. A Fault Oblivious Extreme-Scale Execution Environment

    Energy Technology Data Exchange (ETDEWEB)

    McKie, Jim

    2014-11-20

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to data and work distribution for massively parallel, fault-oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next-generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi → many core processors. We developed a new operating system, NIX, which supports role-based allocation of cores to processes and was released as open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault-tolerant key-value store and identified scaling issues. A second fault-tolerant task-parallel library was developed, based on the Linda tuple space model, that used low-level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations

  1. Further outlooks: extremely uncomfortable; Die weiteren Aussichten: extrem ungemuetlich

    Energy Technology Data Exchange (ETDEWEB)

    Resenhoeft, T.

    2006-07-01

    The climate has been changing extremely over the last decades. Scientists dealing with extreme weather should not only stare at computer simulations. They also have to turn towards the psyche, take personal experiences seriously, know their statistics, put supposedly sensational reports into perspective and, last but not least, collect more data. (GL)

  2. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  3. Kinetic turbulence simulations at extreme scale on leadership-class systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bei [Princeton Univ., Princeton, NJ (United States); Ethier, Stephane [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Tang, William [Princeton Univ., Princeton, NJ (United States); Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Williams, Timothy [Argonne National Lab. (ANL), Argonne, IL (United States); Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Madduri, Kamesh [The Pennsylvania State Univ., University Park, PA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically-important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q on 786,432 cores of Mira at ALCF and recently of the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low memory per core systems by enabling routine simulations at unprecedented size (130 million grid points ITER-scale) and resolution (65 billion particles).

  4. Critical exponents of extremal Kerr perturbations

    Science.gov (United States)

    Gralla, Samuel E.; Zimmerman, Peter

    2018-05-01

    We show that scalar, electromagnetic, and gravitational perturbations of extremal Kerr black holes are asymptotically self-similar under the near-horizon, late-time scaling symmetry of the background metric. This accounts for the Aretakis instability (growth of transverse derivatives) as a critical phenomenon associated with the emergent symmetry. We compute the critical exponent of each mode, which is equivalent to its decay rate. It follows from symmetry arguments that, despite the growth of transverse derivatives, all generally covariant scalar quantities decay to zero.

  5. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  6. Modelling of spatio-temporal precipitation relevant for urban hydrology with focus on scales, extremes and climate change

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen

    -correlation lengths for sub-daily extreme precipitation besides having too low intensities. Especially the wrong spatial correlation structure is disturbing from an urban hydrological point of view as short-term extremes will cover too much ground if derived directly from bias corrected regional climate model output...... of precipitation are compared and used to rank climate models with respect to performance metrics. The four different observational data sets themselves are compared at daily temporal scale with respect to climate indices for mean and extreme precipitation. Data density seems to be a crucial parameter for good...... happening in summer and most of the daily extremes in fall. This behaviour is in good accordance with reality where short term extremes originate in convective precipitation cells that occur when it is very warm and longer term extremes originate in frontal systems that dominate the fall and winter seasons...

  7. Extreme events in total ozone: Spatio-temporal analysis from local to global scale

    Science.gov (United States)

    Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; di Rocco, Stefania; Jancso, Leonhardt M.; Peter, Thomas; Davison, Anthony C.

    2010-05-01

    dynamics (NAO, ENSO) on total ozone is a global feature in the northern mid-latitudes (Rieder et al., 2010c). In a next step frequency distributions of extreme events are analyzed on global scale (northern and southern mid-latitudes). A specific focus here is whether findings gained through analysis of long-term European ground based stations can be clearly identified as a global phenomenon. By showing results from these three types of studies an overview of extreme events in total ozone (and the dynamical and chemical features leading to those) will be presented from local to global scales. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Rieder, H.E., Jancso, L., Staehelin, J., Maeder, J.A., Ribatet, Peter, T., and A.D., Davison (2010): Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations, in preparation. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa

  8. Cloud computing as a new technology trend in education

    OpenAIRE

    Шамина, Ольга Борисовна; Буланова, Татьяна Валентиновна

    2014-01-01

    The construction and operation of extremely large-scale, commodity-computer datacenters was the key enabler of Cloud Computing. Cloud Computing can offer services that are highly beneficial for use in education. With Cloud Computing it is possible to increase the quality of education, improve communicative culture, and give teachers and students new application opportunities.

  9. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  10. Large Scale Influences on Summertime Extreme Precipitation in the Northeastern United States

    Science.gov (United States)

    Collow, Allison B. Marquardt; Bosilovich, Michael G.; Koster, Randal Dean

    2016-01-01

    Observations indicate that over the last few decades there has been a statistically significant increase in precipitation in the northeastern United States and that this can be attributed to an increase in precipitation associated with extreme precipitation events. Here a state-of-the-art atmospheric reanalysis is used to examine such events in detail. Daily extreme precipitation events defined at the 75th and 95th percentile from gridded gauge observations are identified for a selected region within the Northeast. Atmospheric variables from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), are then composited during these events to illustrate the time evolution of associated synoptic structures, with a focus on vertically integrated water vapor fluxes, sea level pressure, and 500-hectopascal heights. Anomalies of these fields move into the region from the northwest, with stronger anomalies present in the 95th percentile case. Although previous studies show tropical cyclones are responsible for the most intense extreme precipitation events, only 10 percent of the events in this study are caused by tropical cyclones. On the other hand, extreme events resulting from cutoff low pressure systems have increased. The time period of the study was divided in half to determine how the mean composite has changed over time. An arc of lower sea level pressure along the East Coast and a change in the vertical profile of equivalent potential temperature suggest a possible increase in the frequency or intensity of synoptic-scale baroclinic disturbances.

  11. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    Science.gov (United States)

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
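    The sketch below illustrates the generic calculation this record refers to: fitting Gumbel (extreme value) parameters to a sample of alignment scores and converting an observed score into a tail p-value. It is a hedged illustration only; the scores are synthetic, SciPy's maximum-likelihood fit is used as a stand-in, and the paper's actual estimator is built from mixture distribution theory and associated BLOSUM matrices.

    # Hedged illustration: generic Gumbel fit to (synthetic) alignment scores
    # and conversion of a score into a tail p-value. Not the paper's estimator.
    import numpy as np
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(0)
    scores = rng.gumbel(loc=25.0, scale=6.0, size=10_000)   # synthetic score sample

    mu, beta = gumbel_r.fit(scores)          # location and scale parameters
    p_value = gumbel_r.sf(60.0, mu, beta)    # P(score >= 60) under the fitted Gumbel
    print(f"mu = {mu:.2f}, beta = {beta:.2f}, p = {p_value:.3g}")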

  12. Maintaining SCALE as a reliable computational system for criticality safety analysis

    International Nuclear Information System (INIS)

    Bowmann, S.M.; Parks, C.V.; Martin, S.K.

    1995-01-01

    Accurate and reliable computational methods are essential for nuclear criticality safety analyses. The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer code system was originally developed at Oak Ridge National Laboratory (ORNL) to enable users to easily set up and perform criticality safety analyses, as well as shielding, depletion, and heat transfer analyses. Over the fifteen-year life of SCALE, the mainstay of the system has been the criticality safety analysis sequences that have featured the KENO-IV and KENO-V.A Monte Carlo codes and the XSDRNPM one-dimensional discrete-ordinates code. The criticality safety analysis sequences provide automated material and problem-dependent resonance processing for each criticality calculation. This report details configuration management which is essential because SCALE consists of more than 25 computer codes (referred to as modules) that share libraries of commonly used subroutines. Changes to a single subroutine in some cases affect almost every module in SCALE! Controlled access to program source and executables and accurate documentation of modifications are essential to maintaining SCALE as a reliable code system. The modules and subroutine libraries in SCALE are programmed by a staff of approximately ten Code Managers. The SCALE Software Coordinator maintains the SCALE system and is the only person who modifies the production source, executables, and data libraries. All modifications must be authorized by the SCALE Project Leader prior to implementation

  13. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.; Roller, Sabine P.; Seitsonen, Ari Paavo; Valcke, Sophie; Keyes, David E.; Sawley, Marie Christine; Schulthess, Thomas C.; Shalf, John M.

    2013-01-01

    and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators

  14. Toward Improving Predictability of Extreme Hydrometeorological Events: the Use of Multi-scale Climate Modeling in the Northern High Plains

    Science.gov (United States)

    Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.

    2014-12-01

    Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities, such as agriculture in the NHP, drive economies. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple to identify what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean Land Atmospheric Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite-volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography (which reduces errors in gradient computation and anomalous vertical dispersion). Our hypothesis is that wet conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., the years 2011-2012). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during a maximum of 15 days of refined-mesh simulations, drought is evaluated during the following 15 months. Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th-degree resolution dataset obtained from climatological stations in Canada, US, and

  15. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and specifically present a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows us to formulate the parallel computation of a single phase-space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads, as well as the benefits and drawbacks introduced by this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte size, new processes or complex higher-order corrections that are currently out of reach could be evaluated with a VM, given enough computing power.
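    The byte-code interpreter pattern described above can be illustrated with a minimal, stand-alone sketch. This is not the O'Mega byte code; the instruction set, encoding, and example expression are invented solely to show how a stack-based VM evaluates a pre-compiled instruction stream without a compile-and-link step.

    # Minimal sketch of a stack-based byte-code interpreter (not the O'Mega
    # byte code): op codes, operands, and the example program are hypothetical.

    PUSH_CONST, PUSH_VAR, ADD, MUL = range(4)

    def run(bytecode, variables):
        """Evaluate a list of (opcode, operand) pairs against input variables."""
        stack = []
        for op, arg in bytecode:
            if op == PUSH_CONST:
                stack.append(arg)
            elif op == PUSH_VAR:
                stack.append(variables[arg])
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == MUL:
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            else:
                raise ValueError(f"unknown op code {op}")
        return stack.pop()

    # Example: evaluate 2*x + y at x = 3, y = 1 (prints 7.0).
    program = [(PUSH_CONST, 2.0), (PUSH_VAR, "x"), (MUL, None),
               (PUSH_VAR, "y"), (ADD, None)]
    print(run(program, {"x": 3.0, "y": 1.0}))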

  16. Characterization and prediction of extreme events in turbulence

    Science.gov (United States)

    Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.

    2017-11-01

    Extreme events in Nature such as tornadoes, large floods and strong earthquakes are rare but can have devastating consequences. The predictability of these events is very limited at present. Extreme events in turbulence are the very large events in small scales that are intermittent in character. We examine events in energy dissipation rate and enstrophy which are several tens to hundreds to thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence with Taylor Reynolds numbers spanning a decade, computed with different small scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with an aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than can be done by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).

  17. Optimization and large scale computation of an entropy-based moment closure

    Science.gov (United States)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  18. Measurement Properties of the Lower Extremity Functional Scale: A Systematic Review.

    Science.gov (United States)

    Mehta, Saurabh P; Fulton, Allison; Quach, Cedric; Thistle, Megan; Toledo, Cesar; Evans, Neil A

    2016-03-01

    Systematic review of measurement properties. Many primary studies have examined the measurement properties, such as reliability, validity, and sensitivity to change, of the Lower Extremity Functional Scale (LEFS) in different clinical populations. A systematic review summarizing these properties for the LEFS may provide an important resource. To locate and synthesize evidence on the measurement properties of the LEFS and to discuss the clinical implications of the evidence. A literature search was conducted in 4 databases (PubMed, MEDLINE, Embase, and CINAHL), using predefined search terms. Two reviewers performed a critical appraisal of the included studies using a standardized assessment form. A total of 27 studies were included in the review, of which 18 achieved a very good to excellent methodological quality level. The LEFS scores demonstrated excellent test-retest reliability (intraclass correlation coefficients ranging between 0.85 and 0.99) and demonstrated the expected relationships with measures assessing similar constructs (Pearson correlation coefficient values of greater than 0.7). The responsiveness of the LEFS scores was excellent, as suggested by consistently high effect sizes (greater than 0.8) in patients with different lower extremity conditions. Minimal detectable change at the 90% confidence level (MDC90) for the LEFS scores varied between 8.1 and 15.3 across different reassessment intervals in a wide range of patient populations. The pooled estimate of the MDC90 was 6 points and the minimal clinically important difference was 9 points in patients with lower extremity musculoskeletal conditions, which are indicative of true change and clinically meaningful change, respectively. The results of this review support the reliability, validity, and responsiveness of the LEFS scores for assessing functional impairment in a wide array of patient groups with lower extremity musculoskeletal conditions.
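    The MDC90 figures pooled above are conventionally derived from the test-retest ICC and the baseline standard deviation. A minimal sketch of that calculation follows; the input numbers are made up, and the review itself pools published estimates rather than recomputing them.

    # Hedged sketch of the conventional MDC90 calculation from test-retest data.
    # sd_baseline and icc below are invented example values, not study data.
    import math

    def mdc90(sd_baseline, icc):
        """Minimal detectable change at the 90% confidence level."""
        sem = sd_baseline * math.sqrt(1.0 - icc)   # standard error of measurement
        return 1.645 * math.sqrt(2.0) * sem        # z(90%) * sqrt(2) * SEM

    print(round(mdc90(sd_baseline=12.0, icc=0.90), 1))   # ~8.8 LEFS points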

  19. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data as possible while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. In large part, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D, as well as engaged in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  20. Development of a small-scale computer cluster

    Science.gov (United States)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has driven the need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers can multiply the performance of a single computer with the proper software. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost effective.

  1. Combinations of large-scale circulation anomalies conducive to precipitation extremes in the Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek; Müller, Miloslav

    2014-01-01

    Roč. 138, March 2014 (2014), s. 205-212 ISSN 0169-8095 R&D Projects: GA ČR(CZ) GAP209/11/1990 Institutional support: RVO:68378289 Keywords : precipitation extreme * synoptic-scale cause * re-analysis * circulation anomaly Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 2.844, year: 2014 http://www.sciencedirect.com/science/article/pii/S0169809513003372

  2. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  3. The spatial return level of aggregated hourly extreme rainfall in Peninsular Malaysia

    Science.gov (United States)

    Shaffie, Mardhiyyah; Eli, Annazirin; Wan Zin, Wan Zawiah; Jemain, Abdul Aziz

    2015-07-01

    This paper is intended to ascertain the spatial pattern of extreme rainfall distribution in Peninsular Malaysia at several short time intervals, i.e., on an hourly basis. The motivation for this research comes from historical records of extreme rainfall in Peninsular Malaysia, in which many hydrological disasters in this region occur within a short time period. The hourly periods considered are 1, 2, 3, 6, 12, and 24 h. Many previous hydrological studies dealt with daily rainfall data; thus, this study enables a comparison of the estimated performance between daily and hourly rainfall data analyses so as to identify the impact of extreme rainfall at a shorter time scale. Return levels based on the time aggregations considered are also computed. Parameter estimation using the L-moment method was conducted for four probability distributions, namely the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type III (PE3) distributions. Aided by the L-moment diagram test and the mean square error (MSE) test, GLO was found to be the most appropriate distribution to represent the extreme rainfall data. For most return periods (10, 50, and 100 years), the spatial patterns revealed that the rainfall distribution across the peninsula differs for 1- and 24-h extreme rainfalls. The outcomes of this study would provide additional information regarding patterns of extreme rainfall in Malaysia which may not be detected when considering only a coarser time scale such as daily; thus, appropriate measures for shorter time scales of extreme rainfall can be planned. The implementation of such measures would help the authorities reduce the impact of any disastrous natural event.
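    As a worked illustration of how a return level is read off a fitted distribution, the sketch below fits a GEV to synthetic annual-maximum hourly intensities with SciPy's maximum-likelihood estimator and evaluates the 10-, 50-, and 100-year return levels. This is only a stand-in: the paper estimates parameters by the L-moment method and found the GLO distribution most appropriate for these data.

    # Hedged sketch: GEV fit (maximum likelihood) and T-year return levels.
    # 'annual_max' is synthetic data standing in for the station records.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    annual_max = rng.gumbel(loc=30.0, scale=8.0, size=58)     # mm/h, "1950-2007"

    shape, loc, scale = genextreme.fit(annual_max)
    for T in (10, 50, 100):                                   # return periods in years
        level = genextreme.ppf(1.0 - 1.0 / T, shape, loc, scale)
        print(f"{T:>3}-year return level: {level:.1f} mm/h")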

  4. Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764

    Energy Technology Data Exchange (ETDEWEB)

    Mustain, Christopher J. [Council on Competitiveness, Washington, DC (United States)

    2016-01-13

    The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.

  5. Developing a New Computer Game Attitude Scale for Taiwanese Early Adolescents

    Science.gov (United States)

    Liu, Eric Zhi-Feng; Lee, Chun-Yi; Chen, Jen-Huang

    2013-01-01

    With ever increasing exposure to computer games, gaining an understanding of the attitudes held by young adolescents toward such activities is crucial; however, few studies have provided scales with which to accomplish this. This study revisited the Computer Game Attitude Scale developed by Chappell and Taylor in 1997, reworking the overall…

  6. Effect of Variable Spatial Scales on USLE-GIS Computations

    Science.gov (United States)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

    Use of appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessment of annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scales affect USLE-GIS computations using simulation and statistical variabilities. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in Shakkar River watershed, situated in Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote Sensing and GIS techniques were integrated with Universal Soil Loss Equation (USLE) to predict spatial distribution of soil erosion in the study area at four different spatial scales viz; 30 m, 50 m, 100 m, and 200 m. Rainfall data, soil map, digital elevation model (DEM) and an executable C++ program, and satellite image of the area were used for preparation of the thematic maps for various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at four different grid sizes. The statistical analysis of four estimated datasets showed that sediment loss dataset at 30 m spatial scale has a minimum standard deviation (2.16), variance (4.68), percent deviation from observed values (2.68 - 18.91 %), and highest coefficient of determination (R2 = 0.874) among all the four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for use of finer spatial scales in spatially distributed soil erosion modelling.
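    The USLE estimate behind these maps is a cell-by-cell product of factor grids, A = R x K x LS x C x P, evaluated at the chosen grid size. A minimal sketch of that computation is shown below; the factor grids are tiny synthetic arrays standing in for the rainfall, soil, DEM-derived, and land-cover layers prepared in the study.

    # Hedged sketch of a gridded USLE computation, A = R * K * LS * C * P.
    # The factor grids are synthetic; real values come from rainfall records,
    # soil maps, a DEM, and satellite imagery resampled to 30-200 m cells.
    import numpy as np

    rng = np.random.default_rng(0)
    shape = (4, 4)                       # a tiny grid standing in for the watershed
    R  = rng.uniform(400, 700, shape)    # rainfall erosivity factor
    K  = rng.uniform(0.1, 0.4, shape)    # soil erodibility factor
    LS = rng.uniform(0.5, 4.0, shape)    # slope length-steepness factor (from DEM)
    C  = rng.uniform(0.05, 0.6, shape)   # cover management factor
    P  = rng.uniform(0.5, 1.0, shape)    # support practice factor

    A = R * K * LS * C * P               # annual soil loss per grid cell
    print(f"mean annual soil loss: {A.mean():.1f} (units follow the input factors)")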

  7. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  8. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing will be involved in this framework: multiple local distributed computing environments connected by local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable to perform the simulation of the multi-scale structural analysis.

  9. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  10. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based, high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.

  11. A computer literacy scale for newly enrolled nursing college students: development and validation.

    Science.gov (United States)

    Lin, Tung-Cheng

    2011-12-01

    Increasing application and use of information systems and mobile technologies in the healthcare industry require increasing nurse competency in computer use. Computer literacy is defined as basic computer skills, whereas computer competency is defined as the computer skills necessary to accomplish job tasks. Inadequate attention has been paid to computer literacy and computer competency scale validity. This study developed a computer literacy scale with good reliability and validity and investigated the current computer literacy of newly enrolled students to develop computer courses appropriate to students' skill levels and needs. This study referenced Hinkin's process to develop a computer literacy scale. Participants were newly enrolled first-year undergraduate students, with nursing or nursing-related backgrounds, currently attending a course entitled Information Literacy and Internet Applications. Researchers examined reliability and validity using confirmatory factor analysis. The final version of the developed computer literacy scale included six constructs (software, hardware, multimedia, networks, information ethics, and information security) and 22 measurement items. Confirmatory factor analysis showed that the scale possessed good content validity, reliability, convergent validity, and discriminant validity. This study also found that participants earned the highest scores for the network domain and the lowest score for the hardware domain. With increasing use of information technology applications, courses related to hardware topics should be increased to improve nurse problem-solving abilities. This study recommends that the emphasis on word processing and network-related topics be reduced in favor of an increased emphasis on database, statistical software, hospital information systems, and information ethics.

  12. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore’s law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores

  13. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    International Nuclear Information System (INIS)

    Daily, Jeffrey A.

    2015-01-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K

  14. Power-law scaling of extreme dynamics near higher-order exceptional points

    Science.gov (United States)

    Zhong, Q.; Christodoulides, D. N.; Khajavikhan, M.; Makris, K. G.; El-Ganainy, R.

    2018-02-01

    We investigate the extreme dynamics of non-Hermitian systems near higher-order exceptional points in photonic networks constructed using the bosonic algebra method. We show that strong power oscillations for certain initial conditions can occur as a result of the peculiar eigenspace geometry and its dimensionality collapse near these singularities. By using complementary numerical and analytical approaches, we show that, in the parity-time (PT ) phase near exceptional points, the logarithm of the maximum optical power amplification scales linearly with the order of the exceptional point. We focus in our discussion on photonic systems, but we note that our results apply to other physical systems as well.

  15. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences in deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that will arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.

  16. Rain Characteristics and Large-Scale Environments of Precipitation Objects with Extreme Rain Volumes from TRMM Observations

    Science.gov (United States)

    Zhou, Yaping; Lau, William K M.; Liu, Chuntao

    2013-01-01

    This study adopts a "precipitation object" approach by using 14 years of Tropical Rainfall Measuring Mission (TRMM) Precipitation Feature (PF) and National Centers for Environmental Prediction (NCEP) reanalysis data to study rainfall structure and environmental factors associated with extreme heavy rain events. Characteristics of instantaneous extreme volumetric PFs are examined and compared to those of intermediate and small systems. It is found that instantaneous PFs exhibit a much wider scale range compared to the daily gridded precipitation accumulation range. The top 1% of the rainiest PFs contribute over 55% of total rainfall and have rain volumes 2 orders of magnitude greater than those of the median PFs. We find a threshold near the top 10% beyond which the PFs grow exponentially into larger, deeper, and colder rain systems. NCEP reanalyses show that midlevel relative humidity and total precipitable water increase steadily with increasingly larger PFs, along with a rapid increase of 500 hPa upward vertical velocity beyond the top 10%. This provides the necessary moisture convergence to amplify and sustain the extreme events. The rapid increase in vertical motion is associated with the release of convective available potential energy (CAPE) in mature systems, as is evident in the increase in CAPE of PFs up to the top 10% and the subsequent dropoff. The study illustrates distinct stages in the development of an extreme rainfall event, including: (1) a systematic buildup in large-scale temperature and moisture, (2) a rapid change in rain structure, (3) explosive growth of the PF size, and (4) a release of CAPE before the demise of the event.

  17. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processor cores in total. The cluster runs an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted to evaluate the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI Hello World program written in C. Additionally, a performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, four runs were made with the same code using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time roughly halves when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power compared to a single-CPU machine, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
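    As an equivalent to the cluster's communication test, the sketch below is an MPI "hello" with a small reduction, written with mpi4py (assumed available) rather than the C program used in the study; it would be launched with, e.g., mpiexec -n 8 python hello_mpi.py.

    # Hedged equivalent of the communication test: each rank reports itself and
    # all ranks participate in a reduction. The study used a C MPI program over
    # MPICH2; this Python/mpi4py version is an illustrative stand-in.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    print(f"Hello from rank {rank} of {size} on {MPI.Get_processor_name()}")

    # Simple check that all ranks can communicate: sum the ranks on rank 0.
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum of ranks = {total} (expected {size * (size - 1) // 2})")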

  18. Effort-reward imbalance and one-year change in neck-shoulder and upper extremity pain among call center computer operators.

    Science.gov (United States)

    Krause, Niklas; Burgel, Barbara; Rempel, David

    2010-01-01

    The literature on psychosocial job factors and musculoskeletal pain is inconclusive in part due to insufficient control for confounding by biomechanical factors. The aim of this study was to investigate prospectively the independent effects of effort-reward imbalance (ERI) at work on regional musculoskeletal pain of the neck and upper extremities of call center operators after controlling for (i) duration of computer use both at work and at home, (ii) ergonomic workstation design, (iii) physical activities during leisure time, and (iv) other individual worker characteristics. This was a one-year prospective study among 165 call center operators who participated in a randomized ergonomic intervention trial that has been described previously. Over an approximate four-week period, we measured ERI and 28 potential confounders via a questionnaire at baseline. Regional upper-body pain and computer use were measured by weekly surveys for up to 12 months following the implementation of ergonomic interventions. Regional pain change scores were calculated as the difference between average weekly pain scores pre- and post intervention. A significant relationship was found between high average ERI ratios and one-year increases in right upper-extremity pain after adjustment for pre-intervention regional mean pain score, current and past physical workload, ergonomic workstation design, and anthropometric, sociodemographic, and behavioral risk factors. No significant associations were found with change in neck-shoulder or left upper-extremity pain. This study suggests that ERI predicts regional upper-extremity pain in computer operators working ≥20 hours per week. Control for physical workload and ergonomic workstation design was essential for identifying ERI as a risk factor.

  19. Validity and Reliability of the Upper Extremity Work Demands Scale.

    Science.gov (United States)

    Jacobs, Nora W; Berduszek, Redmar J; Dijkstra, Pieter U; van der Sluis, Corry K

    2017-12-01

    Purpose: To evaluate the validity and reliability of the upper extremity work demands (UEWD) scale. Methods: Participants from different levels of physical work demands, based on the Dictionary of Occupational Titles categories, were included. A historical database of 74 workers was added for factor analysis. Criterion validity was evaluated by comparing observed and self-reported UEWD scores. To assess structural validity, a factor analysis was executed. For reliability, the difference between two self-reported UEWD scores, the smallest detectable change (SDC), test-retest reliability and internal consistency were determined. Results: Fifty-four participants were observed at work and 51 of them filled in the UEWD twice with a mean interval of 16.6 days (SD 3.3, range = 10-25 days). Criterion validity of the UEWD scale was moderate (r = .44, p = .001). Factor analysis revealed that 'force and posture' and 'repetition' subscales could be distinguished with Cronbach's alpha of .79 and .84, respectively. Reliability was good; there was no significant difference between repeated measurements. An SDC of 5.0 was found. Test-retest reliability was good (intraclass correlation coefficient for agreement = .84) and all item-total correlations were >.30. There were two pairs of highly related items. Conclusion: Reliability of the UEWD scale was good, but criterion validity was moderate. Based on current results, a modified UEWD scale (2 items removed, 1 item reworded, divided into 2 subscales) was proposed. Since observation appeared to be an inappropriate gold standard, we advise investigating other types of validity, such as construct validity, in further research.
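    For reference, the internal-consistency figures quoted above are Cronbach's alpha values; the sketch below shows how alpha is computed from a respondents-by-items score matrix. The data are synthetic, so the resulting alpha does not correspond to the study's .79 and .84.

    # Hedged sketch: Cronbach's alpha for a subscale from a respondents x items
    # score matrix. The simulated data below are not the UEWD responses.
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array of shape (n_respondents, n_items)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()    # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
        return (k / (k - 1)) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(60, 1))                     # shared "work demand" factor
    data = latent + rng.normal(scale=0.8, size=(60, 8))   # 8 correlated items
    print(f"alpha = {cronbach_alpha(data):.2f}")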

  20. Large scale computing in theoretical physics: Example QCD

    International Nuclear Information System (INIS)

    Schilling, K.

    1986-01-01

    The limitations of the classical mathematical analysis of Newton and Leibniz appear to be increasingly overcome by the power of modern computers. Large-scale computing techniques - which closely resemble the methods used in simulations within statistical mechanics - allow the treatment of nonlinear systems with many degrees of freedom, such as field theories in nonperturbative situations, where analytical methods fail. The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both large computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean world. The talk will attempt to describe both the computer aspects and the present state of the art of spectrum calculations within lattice QCD. (orig.)

  1. Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.

    2017-12-01

    SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved mega-thrust and a system of splay faults, as well as the seismic wave field and seafloor displacement with frequency content up to 2.2 Hz. We validate the scenario against geodetic, seismological, and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.

  2. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A. B. A.

    2017-07-01

    One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future global and regional SLR, corresponding uncertainties in contemporary ESL have not been assessed and projections are limited. Here we quantify, for the first time at global scale, the uncertainties in present-day ESL estimates, which have by default been ignored in broad-scale sea-level rise impact assessments to date. ESL uncertainties exceed those from global SLR projections and, assuming that we meet the Paris agreement goals, the projected SLR itself by the end of the century in many regions. Both uncertainties in SLR projections and ESL estimates need to be understood and combined to fully assess potential impacts and adaptation needs.

  3. Auto-Scaling of Geo-Based Image Processing in an OpenStack Cloud Computing Environment

    OpenAIRE

    Sanggoo Kang; Kiwon Lee

    2016-01-01

    Cloud computing is a base platform for the distribution of large volumes of data and high-performance image processing on the Web. Despite wide applications in Web-based services and their many benefits, geo-spatial applications based on cloud computing technology are still developing. Auto-scaling realizes automatic scalability, i.e., the scale-out and scale-in processing of virtual servers in a cloud computing environment. This study investigates the applicability of auto-scaling to geo-bas...

  4. The scaling of population persistence with carrying capacity does not asymptote in populations of a fish experiencing extreme climate variability.

    Science.gov (United States)

    White, Richard S A; Wintle, Brendan A; McHugh, Peter A; Booker, Douglas J; McIntosh, Angus R

    2017-06-14

    Despite growing concerns regarding increasing frequency of extreme climate events and declining population sizes, the influence of environmental stochasticity on the relationship between population carrying capacity and time-to-extinction has received little empirical attention. While time-to-extinction increases exponentially with carrying capacity in constant environments, theoretical models suggest increasing environmental stochasticity causes asymptotic scaling, thus making minimum viable carrying capacity vastly uncertain in variable environments. Using empirical estimates of environmental stochasticity in fish metapopulations, we showed that increasing environmental stochasticity resulting from extreme droughts was insufficient to create asymptotic scaling of time-to-extinction with carrying capacity in local populations as predicted by theory. Local time-to-extinction increased with carrying capacity due to declining sensitivity to demographic stochasticity, and the slope of this relationship declined significantly as environmental stochasticity increased. However, recent 1-in-25-year extreme droughts were insufficient to extirpate populations with large carrying capacity. Consequently, large populations may be more resilient to environmental stochasticity than previously thought. The lack of carrying-capacity-related asymptotes in persistence under extreme climate variability reveals how small populations, affected by habitat loss or overharvesting, may be disproportionately threatened by increases in extreme climate events with global warming. © 2017 The Author(s).
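
    The scaling question above can be explored with a toy simulation: a Ricker-type population with both demographic (Poisson) and environmental (lognormal) noise, whose mean time-to-extinction is estimated for a range of carrying capacities. All parameter values and the noise model are assumptions for illustration, not the authors' fish metapopulation model.

        # Toy estimate of mean time-to-extinction vs carrying capacity K under
        # combined demographic and environmental stochasticity (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)

        def time_to_extinction(K, r=1.0, env_sd=0.5, t_max=5000):
            n = K
            for t in range(1, t_max + 1):
                env = rng.lognormal(mean=-0.5 * env_sd**2, sigma=env_sd)  # mean-one noise
                expected = n * np.exp(r * (1.0 - n / K)) * env            # Ricker growth
                n = rng.poisson(expected)                                 # demographic noise
                if n == 0:
                    return t
            return t_max  # censored: population survived the whole horizon

        for K in (10, 50, 100, 500):
            times = [time_to_extinction(K) for _ in range(200)]
            print(f"K={K:4d}  mean time-to-extinction ~ {np.mean(times):7.1f} (censored at 5000)")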

  5. Changes in the intensity of precipitation extremes in Romania at very high temporal scales and implications for the validity of the Clausius-Clapeyron relation

    Science.gov (United States)

    Busuioc, Aristita; Baciu, Madalina; Breza, Traian; Dumitrescu, Alexandru; Stoica, Cerasela; Baghina, Nina

    2016-04-01

    Many observational, theoretical and climate-model-based studies have suggested that warmer climates lead to more intense precipitation events, even when the total annual precipitation is slightly reduced. In this way, it was suggested that extreme precipitation events may increase at the Clausius-Clapeyron (CC) rate under global warming and the constraint of constant relative humidity. However, recent studies show that the relationship between extreme rainfall intensity and atmospheric temperature is much more complex than the CC relationship would suggest, and depends mainly on the temporal resolution of the precipitation data, the region, the storm type, and whether the analysis is conducted on storm events rather than fixed intervals. The present study examines the dependence of extreme rainfall intensity at very high temporal scales on daily temperature, with respect to verification of the CC relation. To address this objective, the analysis is conducted on rainfall events rather than fixed intervals, using rainfall data based on graphic records that include intensities (mm/min) calculated over each interval of constant per-minute intensity. The annual interval with such data available (April to October) is considered at 5 stations over the period 1950-2007. For the Bucuresti-Filaret station the analysis is extended over a longer interval (1898-2007). For each rainfall event, the maximum intensity (mm/min) is retained, and these time series (abbreviated in the following as IMAX) are considered for further analysis. The IMAX data were divided into 2°C-wide bins based on the daily mean temperature. Bins with fewer than 100 values were excluded. The 90th, 99th and 99.9th percentiles were computed from the binned data using the empirical distribution, and their variability has been compared to the CC scaling (i.e. an exponential relation corresponding to a 7% increase per degree of temperature rise). The results show a dependence close to double the CC relation for
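
    The binning-and-percentile procedure described above can be sketched as follows: event-maximum intensities are binned by daily mean temperature in 2°C steps, high percentiles are computed per bin, and the implied growth rate is compared with the roughly 7% per °C Clausius-Clapeyron rate. The data below are synthetic placeholders; only the method is illustrated.

        # Sketch of the CC-scaling check: bin event-maximum intensities by daily
        # mean temperature (2 degC bins), compute high percentiles per bin, and
        # compare the implied growth rate with the ~7 %/degC Clausius-Clapeyron rate.
        import numpy as np

        rng = np.random.default_rng(1)
        # Synthetic placeholders for (daily mean temperature, event maximum intensity).
        temp = rng.uniform(0, 30, 20000)
        imax = rng.gamma(shape=2.0, scale=0.05 * 1.07**temp)  # built-in ~7 %/degC growth

        bins = np.arange(0, 32, 2)
        centers, p99 = [], []
        for lo, hi in zip(bins[:-1], bins[1:]):
            sel = (temp >= lo) & (temp < hi)
            if sel.sum() >= 100:                   # keep only well-populated bins
                centers.append(0.5 * (lo + hi))
                p99.append(np.percentile(imax[sel], 99))

        # Slope of log(percentile) vs temperature gives the percentage change per degC.
        slope = np.polyfit(centers, np.log(p99), 1)[0]
        print(f"estimated scaling: {100 * (np.exp(slope) - 1):.1f} %/degC (CC ~ 7 %/degC)")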

  6. XACC - eXtreme-scale Accelerator Programming Framework

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  7. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar M. [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-06-06

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.

  8. SCALE-4 [Standardized Computer Analyses for Licensing Evaluation]: An improved computational system for spent-fuel cask analysis

    International Nuclear Information System (INIS)

    Parks, C.V.

    1989-01-01

    The purpose of this paper is to provide specific information regarding improvements available with Version 4.0 of the SCALE system and discuss the future of SCALE within the current computing and regulatory environment. The emphasis focuses on the improvements in SCALE-4 over that available in SCALE-3. 10 refs., 1 fig., 1 tab

  9. Engineering of an Extreme Rainfall Detection System using Grid Computing

    Directory of Open Access Journals (Sweden)

    Olivier Terzo

    2012-10-01

    This paper describes a new approach for intensive rainfall data analysis. ITHACA's Extreme Rainfall Detection System (ERDS) is conceived to provide near real-time alerts related to potentially exceptional rainfall worldwide, which can be used by WFP or other humanitarian assistance organizations to evaluate the event and understand the potentially floodable areas where their assistance is needed. The system is based on precipitation analysis and uses satellite rainfall data at a worldwide extent. The project uses the Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis dataset, a NASA-delivered near real-time product for monitoring current rainfall conditions over the world. Given the great deal of data to process, this paper presents an architectural solution based on Grid Computing techniques. Our focus is on the advantages of using a distributed architecture, in terms of performance, for this specific purpose.

  10. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  11. Scaling and clustering effects of extreme precipitation distributions

    Science.gov (United States)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Li, Jianfeng

    2012-08-01

    One of the impacts of climate change and human activities on the hydrological cycle is a change in the precipitation structure. Closely related to the precipitation structure are two characteristics: the volume (m) of wet periods (WPs) and the time interval between WPs, or waiting time (t). Using daily precipitation data for the period 1960-2005 from 590 rain gauge stations in China, these two characteristics are analyzed with respect to the scaling and clustering of precipitation episodes. Our findings indicate that m and t follow similar probability distribution curves, implying that precipitation processes are controlled by similar underlying thermodynamics. Analysis of conditional probability distributions shows a significant dependence of m and t on their previous values of similar volumes, and the dependence tends to be stronger when m is larger or t is longer. This indicates that high-intensity precipitation is more likely to be followed by precipitation episodes of similar intensity, and that long waiting times between WPs tend to be followed by waiting times of similar duration. This result indicates the clustering of extreme precipitation episodes; severe droughts or floods are thus apt to occur in groups.
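
    A minimal sketch of how the two characteristics above, wet-period volume m and waiting time t between wet periods, can be extracted from a daily precipitation series. The series and the wet/dry threshold below are assumptions for illustration.

        # Extract wet-period volumes (m) and waiting times between wet periods (t)
        # from a daily precipitation series (synthetic data, assumed wet/dry threshold).
        import numpy as np

        rng = np.random.default_rng(2)
        n_days = 365 * 46
        precip = rng.gamma(0.4, 8.0, size=n_days) * (rng.random(n_days) < 0.3)

        wet = precip > 0.1                          # assumed wet/dry threshold (mm/day)
        volumes, waits = [], []
        i = 0
        while i < n_days:
            j = i
            while j < n_days and wet[j] == wet[i]:
                j += 1                              # extend the current wet or dry run
            if wet[i]:
                volumes.append(precip[i:j].sum())   # m: total depth of the wet period
            else:
                waits.append(j - i)                 # t: waiting time (days) between WPs
            i = j

        print(f"{len(volumes)} wet periods, median m = {np.median(volumes):.1f} mm, "
              f"median t = {np.median(waits):.0f} days")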

  12. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence, including enhancement of the droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve the scales of turbulent energy-containing eddies while the effects of the turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes resolve fully the dissipation-range flow scales but only partially the inertial subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our ongoing efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  13. Auto-Scaling of Geo-Based Image Processing in an OpenStack Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Sanggoo Kang

    2016-08-01

    Cloud computing is a base platform for the distribution of large volumes of data and high-performance image processing on the Web. Despite wide applications in Web-based services and their many benefits, geo-spatial applications based on cloud computing technology are still developing. Auto-scaling realizes automatic scalability, i.e., the scale-out and scale-in processing of virtual servers in a cloud computing environment. This study investigates the applicability of auto-scaling to geo-based image processing algorithms by comparing the performance of a single virtual server and multiple auto-scaled virtual servers under identical experimental conditions. In this study, the cloud computing environment is built with OpenStack, and four algorithms from the Orfeo toolbox are used for practical geo-based image processing experiments. The auto-scaling results from all experimental performance tests demonstrate applicable significance with respect to cloud utilization concerning response time. Auto-scaling contributes to the development of web-based satellite image application services using cloud-based technologies.

  14. Extreme weather events in southern Germany - Climatological risk and development of a large-scale identification procedure

    Science.gov (United States)

    Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.

    2009-04-01

    Extreme weather events such as thunderstorms, hail and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about present-day climatological risk, its economic effects, and its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy that are particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning and prediction systems to cope with such events. Such an attempt is made for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for the classification of the causative synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) has proved to be most suitable for correct classification of the large-scale flow conditions. Certain CWTs have turned out to be prone to heavy precipitation or, conversely, to have a very low risk of such events. Other large-scale parameters are tested in connection with CWTs to find a combination with the highest skill in identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results, but assumes knowledge of regional orographic particularities. Therefore, ongoing work is focused on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady Growth Rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM in simulating the frequency and intensity of extreme weather events. Data from the A1B scenario (2000-2050) will be examined for a possible climate change

  15. Identification of discrete vascular lesions in the extremities using post-mortem computed tomography angiography – Case reports

    NARCIS (Netherlands)

    Haakma, Wieke; Rohde, Marianne; Uhrenholt, Lars; Pedersen, Michael; Boel, Lene Warner Thorup

    2017-01-01

    In this case report, we introduced post-mortem computed tomography angiography (PMCTA) in three cases suffering from vascular lesions in the upper extremities. In each subject, the third part of the axillary arteries and veins were used to catheterize the arms. The vessels were filled with a barium

  16. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  17. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    Energy Technology Data Exchange (ETDEWEB)

    Alipour, N.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of); Innes, D. E. [Max-Planck Institut fuer Sonnensystemforschung, 37191 Katlenburg-Lindau (Germany)

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
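
    The classification step described above (an SVM trained on Zernike-moment features of space-time slices) can be sketched as below. The feature extraction is a placeholder, and the labels, training-set sizes and kernel choice are assumptions rather than the authors' exact configuration.

        # Sketch of the detection pipeline: Zernike-moment features of space-time
        # slices feed a support vector machine classifier (features are placeholders).
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)

        def zernike_features(slice_img):
            # Placeholder for real Zernike moments (e.g. mahotas.features.zernike_moments);
            # here we simply return a feature vector of a plausible length.
            return rng.random(25)

        # Hypothetical labelled space-time slices: 150 events and 700 non-events.
        slices = [rng.random((64, 64)) for _ in range(850)]
        labels = np.array([1] * 150 + [0] * 700)

        X = np.array([zernike_features(s) for s in slices])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                                  stratify=labels, random_state=0)
        clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))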

  18. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    O' Leary, Patrick [Kitware, Inc., Clifton Park, NY (United States)

    2017-09-13

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  19. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  20. Standardizing Scale Height Computation of MAVEN NGIMS Neutral Data and Variations Between Exobase and Homopause Scale Heights

    Science.gov (United States)

    Elrod, M. K.; Slipski, M.; Curry, S.; Williamson, H. N.; Benna, M.; Mahaffy, P. R.

    2017-12-01

    The MAVEN NGIMS team produces a level 3 product which includes the computation of the Ar scale height and the atmospheric temperature at 200 km. In the latest version (v05_r01) this has been revised to include scale height fits for CO2, N2, CO, and O. Members of the MAVEN team have used various methods to compute scale heights, leading to significant variations in scale height values depending on the fits and techniques, even within a few orbits or, occasionally, the same pass. Additionally, fitting scale heights in a very stable atmosphere such as the dayside versus the nightside can give different results based on the boundary conditions. Currently, most methods compute only Ar scale heights, as Ar is most stable and reacts least with the instrument. The NGIMS team has chosen to expand these fitting techniques to include fitted scale heights for CO2, N2, CO, and O. Having compared multiple techniques, a simple fit method was found to be the most reliable under most conditions. This fitting method takes the exobase altitude of the CO2 atmosphere as the highest point of the fit, uses periapsis as the lowest point, and then fits altitude versus log(density). The slope of altitude versus log(density) is -1/H, where H is the scale height of the atmosphere for each species. Since this region lies between the homopause and the exobase, each species has a different scale height at this point. This is being released as a new standardization for the level 3 product, with the understanding that scientists and team members will continue to compute more precise scale heights and temperatures as needed based on science and model demands. This is being released in the PDS NGIMS level 3 v05 files for August 2017. Additionally, we are examining these scale heights for variations seasonally, diurnally, and above and below the exobase. The atmosphere is significantly more stable on the dayside than on the nightside. We have also found
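
    The fitting procedure above reduces to a linear fit of log(density) against altitude between periapsis and the exobase, with scale height H = -1/slope. A minimal sketch with an assumed synthetic density profile:

        # Scale-height fit as described above: the slope of log(density) vs altitude
        # is -1/H, fitted between periapsis and an assumed exobase altitude.
        import numpy as np

        # Synthetic Ar-like profile: n(z) = n0 * exp(-(z - z0)/H_true), plus noise.
        H_true = 11.0                                       # km (illustrative)
        z = np.linspace(160.0, 280.0, 60)                   # altitude, km
        rng = np.random.default_rng(6)
        density = 1.0e9 * np.exp(-(z - z.min()) / H_true) * (1 + 0.02 * rng.standard_normal(z.size))

        periapsis, exobase = 160.0, 200.0                   # assumed fit boundaries (km)
        sel = (z >= periapsis) & (z <= exobase)
        slope, _ = np.polyfit(z[sel], np.log(density[sel]), 1)
        H_fit = -1.0 / slope
        print(f"fitted scale height H = {H_fit:.2f} km (true value {H_true} km)")
        # An isothermal relation T = m*g*H/k_B would then give the temperature there.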

  1. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2015-01-01

    Warehouse-sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers that provide the infrastructure for the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput, the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within...

  2. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  3. Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining

    Energy Technology Data Exchange (ETDEWEB)

    Bautista-Gomez, Leonardo [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-16

    Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.
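
    As a simplified illustration of detection based on the behavior of application datasets, each point of a simulation field can be predicted from its recent history and flagged when it falls far outside an adaptive error bound. This is a generic sketch of the idea, not the detectors developed in this work.

        # Generic sketch of behaviour-based silent-error detection: predict each point
        # by linear extrapolation from the last two time steps and flag values that
        # fall far outside an adaptive tolerance.
        import numpy as np

        def detect_corruption(prev2, prev1, current, k=5.0, atol=1e-3):
            """Flag points deviating from a linear-in-time prediction by more than
            k times the recent step-to-step change (plus a small absolute floor)."""
            predicted = 2.0 * prev1 - prev2
            tol = k * (np.abs(prev1 - prev2) + atol)
            return np.abs(current - predicted) > tol

        # Smoothly evolving synthetic field with one injected bit-flip-like error.
        x = np.linspace(0, 10, 1000)
        field = [np.sin(x + 0.01 * t) for t in range(3)]
        corrupted = field[2].copy()
        corrupted[123] *= 1.0e6                     # injected silent corruption

        flags = detect_corruption(field[0], field[1], corrupted)
        print("flagged indices:", np.nonzero(flags)[0])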

  4. Influence of climate variability versus change at multi-decadal time scales on hydrological extremes

    Science.gov (United States)

    Willems, Patrick

    2014-05-01

    Recent studies have shown that rainfall and hydrological extremes do not randomly occur in time, but are subject to multidecadal oscillations. In addition to these oscillations, there are temporal trends due to climate change. Design statistics, such as intensity-duration-frequency (IDF) for extreme rainfall or flow-duration-frequency (QDF) relationships, are affected by both types of temporal changes (short term and long term). This presentation discusses these changes, how they influence water engineering design and decision making, and how this influence can be assessed and taken into account in practice. The multidecadal oscillations in rainfall and hydrological extremes were studied based on a technique for the identification and analysis of changes in extreme quantiles. The statistical significance of the oscillations was evaluated by means of a non-parametric bootstrapping method. Oscillations in large scale atmospheric circulation were identified as the main drivers for the temporal oscillations in rainfall and hydrological extremes. They also explain why spatial phase shifts (e.g. north-south variations in Europe) exist between the oscillation highs and lows. Next to the multidecadal climate oscillations, several stations show trends during the most recent decades, which may be attributed to climate change as a result of anthropogenic global warming. Such attribution to anthropogenic global warming is, however, uncertain. It can be done based on simulation results with climate models, but it is shown that the climate model results are too uncertain to enable a clear attribution. Water engineering design statistics, such as extreme rainfall IDF or peak or low flow QDF statistics, obviously are influenced by these temporal variations (oscillations, trends). It is shown in the paper, based on the Brussels 10-minutes rainfall data, that rainfall design values may be about 20% biased or different when based on short rainfall series of 10 to 15 years length, and

  5. A multiple-scaling method of the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite element problems. It is therefore very difficult to carry out parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, such parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method, which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by FEM on the precise geometry of the thread for some elementary loadings. The global one is formulated at the gudgeon scale and is reduced to a one-dimensional problem. The resolution of this global problem involves an insignificant computational cost. Then, a post-processing step gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. Validation by comparison with a direct FE computation and some further applications are presented

  6. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., of cancer and its treatment. A standard histopathology slice can easily be scanned at a high resolution of, say, 200,000×200,000 pixels. These high-resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this newly emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework is an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly-supervised learning, applied to image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain a further improvement in clustering performance.

  7. Spatial extreme value analysis to project extremes of large-scale indicators for severe weather.

    Science.gov (United States)

    Gilleland, Eric; Brown, Barbara G; Ammann, Caspar M

    2013-09-01

    Concurrently high values of the maximum potential wind speed of updrafts (Wmax) and 0-6 km wind shear (Shear) have been found to represent conducive environments for severe weather, which subsequently provides a way to study severe weather in future climates. Here, we employ a model for the product of these variables (WmSh) from the National Center for Atmospheric Research/United States National Center for Environmental Prediction reanalysis over North America conditioned on their having extreme energy in the spatial field in order to project the predominant spatial patterns of WmSh. The approach is based on the Heffernan and Tawn conditional extreme value model. Results suggest that this technique estimates the spatial behavior of WmSh well, which allows for exploring possible changes in the patterns over time. While the model enables a method for inferring the uncertainty in the patterns, such analysis is difficult with the currently available inference approach. A variation of the method is also explored to investigate how this type of model might be used to qualitatively understand how the spatial patterns of WmSh correspond to extreme river flow events. A case study for river flows from three rivers in northwestern Tennessee is studied, and it is found that advection of WmSh from the Gulf of Mexico prevails while elsewhere, WmSh is generally very low during such extreme events. © 2013 The Authors. Environmetrics published by John Wiley & Sons, Ltd.

  8. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    © 2015 Elsevier B.V. All rights reserved. Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
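
    The hierarchical transformation described above can be sketched with mpi4py: ranks are partitioned into groups, the message is broadcast among group leaders first and then within each group. The group size and the use of the built-in Bcast at each level are assumptions of this sketch, not the paper's optimized algorithm.

        # Two-level (hierarchical) broadcast built on top of MPI_Bcast, in the spirit
        # of the approach above. Assumes the root is global rank 0.
        # Run with, e.g.: mpiexec -n 16 python hbcast.py
        from mpi4py import MPI
        import numpy as np

        def hierarchical_bcast(buf, comm, group_size=4):
            rank = comm.Get_rank()
            group = rank // group_size                       # group this rank belongs to
            is_leader = (rank % group_size == 0)
            leaders = comm.Split(color=0 if is_leader else MPI.UNDEFINED, key=rank)
            local = comm.Split(color=group, key=rank)        # communicator within the group
            if leaders != MPI.COMM_NULL:
                leaders.Bcast(buf, root=0)                   # step 1: root -> group leaders
            local.Bcast(buf, root=0)                         # step 2: leader -> its group
            local.Free()
            if leaders != MPI.COMM_NULL:
                leaders.Free()

        comm = MPI.COMM_WORLD
        data = np.arange(1024, dtype="d") if comm.Get_rank() == 0 else np.empty(1024, dtype="d")
        hierarchical_bcast(data, comm)
        assert data[-1] == 1023.0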

  9. Reliability, validity, and sensitivity to change of the lower extremity functional scale in individuals affected by stroke.

    Science.gov (United States)

    Verheijde, Joseph L; White, Fred; Tompkins, James; Dahl, Peder; Hentz, Joseph G; Lebec, Michael T; Cornwall, Mark

    2013-12-01

    To investigate reliability, validity, and sensitivity to change of the Lower Extremity Functional Scale (LEFS) in individuals affected by stroke. The secondary objective was to test the validity and sensitivity of a single-item linear analog scale (LAS) of function. Prospective cohort reliability and validation study. A single rehabilitation department in an academic medical center. Forty-three individuals receiving neurorehabilitation for lower extremity dysfunction after stroke were studied. Their ages ranged from 32 to 95 years, with a mean of 70 years; 77% were men. Test-retest reliability was assessed by calculating the classical intraclass correlation coefficient, and the Bland-Altman limits of agreement. Validity was assessed by calculating the Pearson correlation coefficient between the instruments. Sensitivity to change was assessed by comparing baseline scores with end of treatment scores. Measurements were taken at baseline, after 1-3 days, and at 4 and 8 weeks. The LEFS, Short-Form-36 Physical Function Scale, Berg Balance Scale, Six-Minute Walk Test, Five-Meter Walk Test, Timed Up-and-Go test, and the LAS of function were used. The test-retest reliability of the LEFS was found to be excellent (ICC = 0.96). Correlated with the 6 other measures of function studied, the validity of the LEFS was found to be moderate to high (r = 0.40-0.71). Regarding the sensitivity to change, the mean LEFS scores from baseline to study end increased 1.2 SD and for LAS 1.1 SD. LEFS exhibits good reliability, validity, and sensitivity to change in patients with lower extremity impairments secondary to stroke. Therefore, the LEFS can be a clinically efficient outcome measure in the rehabilitation of patients with subacute stroke. The LAS is shown to be a time-saving and reasonable option to track changes in a patient's functional status. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  10. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    International Nuclear Information System (INIS)

    Yu, Lejiang; Zhong, Shiyuan; Pei, Lisi; Bian, Xindi; Heilman, Warren E

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for severe flooding over a large region, little is known about how extreme precipitation events that cause flash flooding and occur at sub-daily time scales have changed over time. Here we use the observed hourly precipitation from the North American Land Data Assimilation System Phase 2 forcing datasets to determine trends in the frequency of extreme precipitation events of short (1 h, 3 h, 6 h, 12 h and 24 h) duration for the period 1979–2013. The results indicate an increasing trend in the central and eastern US. Over most of the western US, especially the Southwest and the Intermountain West, the trends are generally negative. These trends can be largely explained by the interdecadal variability of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation (AMO), with the AMO making a greater contribution to the trends in both warm and cold seasons. (letter)

  11. Extreme weather: Subtropical floods and tropical cyclones

    Science.gov (United States)

    Shaevitz, Daniel A.

    Extreme weather events have a large effect on society. As such, it is important to understand these events and to project how they may change in a future, warmer climate. The aim of this thesis is to develop a deeper understanding of two types of extreme weather events: subtropical floods and tropical cyclones (TCs). In the subtropics, the latitude is high enough that quasi-geostrophic dynamics are at least qualitatively relevant, while low enough that moisture may be abundant and convection strong. Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. In the first part of this thesis, I examine the possible triggering of convection by the large-scale dynamics and investigate the coupling between the two. Specifically, two examples of extreme precipitation events in the subtropics are analyzed: the 2010 and 2014 floods of India and Pakistan and the 2015 flood of Texas and Oklahoma. I invert the quasi-geostrophic omega equation to decompose the large-scale vertical motion profile into components due to synoptic forcing and diabatic heating. Additionally, I present model results from within the Column Quasi-Geostrophic framework. A single-column model and a cloud-resolving model are forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. It is found that convection was triggered primarily by mechanically forced orographic ascent over the Himalayas during the India/Pakistan flood and by upper-level potential vorticity disturbances during the Texas/Oklahoma flood. Furthermore, a climate attribution analysis was conducted for the Texas/Oklahoma flood and it is found that anthropogenic climate change was responsible for a small amount of rainfall during the event but the

  12. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  13. How extreme is extreme hourly precipitation?

    Science.gov (United States)

    Papalexiou, Simon Michael; Dialynas, Yannis G.; Pappas, Christoforos

    2016-04-01

    The importance of accurate representation of precipitation at fine time scales (e.g., hourly), directly associated with flash flood events, is crucial in hydrological design and prediction. The upper part of a probability distribution, known as the distribution tail, determines the behavior of extreme events. In general, and loosely speaking, tails can be categorized in two families: the subexponential and the hyperexponential family, with the first generating more intense and more frequent extremes compared to the latter. In past studies, the focus has been mainly on daily precipitation, with the Gamma distribution being the most popular model. Here, we investigate the behaviour of tails of hourly precipitation by comparing the upper part of empirical distributions of thousands of records with three general types of tails corresponding to the Pareto, Lognormal, and Weibull distributions. Specifically, we use thousands of hourly rainfall records from all over the USA. The analysis indicates that heavier-tailed distributions describe better the observed hourly rainfall extremes in comparison to lighter tails. Traditional representations of the marginal distribution of hourly rainfall may significantly deviate from observed behaviours of extremes, with direct implications on hydroclimatic variables modelling and engineering design.
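
    The tail comparison described above can be sketched by fitting candidate distributions to the exceedances over a high threshold and comparing their log-likelihoods. The data below are synthetic and the threshold choice is an assumption.

        # Sketch of the tail comparison: fit Generalized Pareto, Lognormal and Weibull
        # models to exceedances of synthetic "hourly rainfall" above a high threshold
        # and compare their log-likelihoods.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        hourly = rng.pareto(3.0, 200000) * 2.0            # synthetic hourly depths (mm)

        threshold = np.quantile(hourly, 0.99)             # assumed tail threshold
        excess = hourly[hourly > threshold] - threshold   # exceedances over threshold

        candidates = {
            "Generalized Pareto": stats.genpareto,
            "Lognormal": stats.lognorm,
            "Weibull": stats.weibull_min,
        }
        for name, dist in candidates.items():
            params = dist.fit(excess, floc=0)             # keep the location fixed at 0
            loglik = np.sum(dist.logpdf(excess, *params))
            print(f"{name:18s} log-likelihood = {loglik:12.1f}")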

  14. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time needed for their analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
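
    The parallelization pattern described here (independent model builds distributed across cloud instances) can be illustrated locally with a process pool; in a real deployment each task would be dispatched to a cloud worker instead. The data set, model and parameter grid below are placeholders.

        # Local stand-in for cloud-parallel model building: each parameter setting is
        # trained independently, so the tasks map naturally onto cloud worker instances.
        from concurrent.futures import ProcessPoolExecutor

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def build_model(n_estimators):
            X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
            clf = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
            return n_estimators, cross_val_score(clf, X, y, cv=3).mean()

        if __name__ == "__main__":
            grid = [50, 100, 200, 400]                    # placeholder parameter grid
            with ProcessPoolExecutor(max_workers=4) as pool:
                for n, score in pool.map(build_model, grid):
                    print(f"n_estimators={n:4d}  CV accuracy = {score:.3f}")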

  15. Direct Computation of Sound Radiation by Jet Flow Using Large-scale Equations

    Science.gov (United States)

    Mankbadi, R. R.; Shih, S. H.; Hixon, D. R.; Povinelli, L. A.

    1995-01-01

    Jet noise is directly predicted using large-scale equations. The computational domain is extended in order to directly capture the radiated field. As in conventional large-eddy-simulations, the effect of the unresolved scales on the resolved ones is accounted for. Special attention is given to boundary treatment to avoid spurious modes that can render the computed fluctuations totally unacceptable. Results are presented for a supersonic jet at Mach number 2.1.

  16. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Science.gov (United States)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  17. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in resp...

  18. Large-scale computation in solid state physics - Recent developments and prospects

    International Nuclear Information System (INIS)

    DeVreese, J.T.

    1985-01-01

    During the past few years an increasing interest in large-scale computation is developing. Several initiatives were taken to evaluate and exploit the potential of ''supercomputers'' like the CRAY-1 (or XMP) or the CYBER-205. In the U.S.A., there first appeared the Lax report in 1982 and subsequently (1984) the National Science Foundation in the U.S.A. announced a program to promote large-scale computation at the universities. Also, in Europe several CRAY- and CYBER-205 systems have been installed. Although the presently available mainframes are the result of a continuous growth in speed and memory, they might have induced a discontinuous transition in the evolution of the scientific method; between theory and experiment a third methodology, ''computational science'', has become or is becoming operational

  19. Microstructural analysis of TRISO particles using multi-scale X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lowe, T., E-mail: tristan.lowe@manchester.ac.uk [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); Bradley, R.S. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); Yue, S. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); The Research Complex at Harwell, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0FA (United Kingdom); Barii, K. [School of Mechanical Engineering, University of Manchester, M13 9PL (United Kingdom); Gelb, J. [Zeiss Xradia Inc., Pleasanton, CA (United States); Rohbeck, N. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); Turner, J. [School of Mechanical Engineering, University of Manchester, M13 9PL (United Kingdom); Withers, P.J. [Manchester X-ray Imaging Facility, School of Materials, University of Manchester, M13 9PL (United Kingdom); The Research Complex at Harwell, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0FA (United Kingdom)

    2015-06-15

    TRISO particles, a composite nuclear fuel built up from ceramic and graphitic layers, have outstanding high temperature resistance. TRISO fuel is the key technology for High Temperature Reactors (HTRs) and the Generation IV Very High Temperature Reactor (VHTR) variant. TRISO offers unparalleled containment of fission products and is extremely robust during accident conditions. An understanding of the thermal performance and mechanical properties of TRISO fuel requires a detailed knowledge of pore sizes, their distribution and interconnectivity. Here, nano-computed tomography (CT) at 50 nm resolution and micro-CT at 1 μm resolution have been used to quantify non-destructively the porosity of a surrogate TRISO particle at the 0.3–10 μm and 3–100 μm scales, respectively. This indicates that pore distributions can reliably be measured down to a size approximately 3 times the pixel size, which is consistent with the segmentation process. Direct comparison with Scanning Electron Microscopy (SEM) sections indicates that destructive sectioning can introduce significant levels of coarse damage, especially in the pyrolytic carbon layers. Further comparative work is required to identify means of minimizing such damage for SEM studies. Finally, since it is non-destructive, multi-scale time-lapse X-ray CT opens the possibility of intermittently tracking the degradation of TRISO structure under thermal cycles or radiation conditions in order to validate models of degradation such as kernel movement. X-ray CT in-situ experimentation of TRISO particles under load and temperature could also be used to understand the internal changes that occur in the particles under accident conditions.

  20. Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)

    Science.gov (United States)

    Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.

    2013-12-01

    We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations exhibit correspondence to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.

  1. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
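
    GRESS and ADGEN themselves are not reproduced here; purely as a generic illustration of the computer-calculus idea they embody (propagating derivatives through model code), the sketch below uses a minimal forward-mode dual-number class on a toy decay model. All names and the model are hypothetical assumptions.

      import math

      class Dual:
          """Minimal forward-mode automatic differentiation: carries a value and its derivative."""
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def _wrap(self, o):
              return o if isinstance(o, Dual) else Dual(o)
          def __add__(self, o):
              o = self._wrap(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = self._wrap(o)
              return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def exp(x):
          return Dual(math.exp(x.val), math.exp(x.val) * x.der)

      def toy_model(k, t=2.0, c0=1.0):
          """Hypothetical 'computer model': c(t) = c0 * exp(-k * t), written with overloaded arithmetic."""
          return c0 * exp(Dual(-t) * k)

      k = Dual(0.3, 1.0)              # seed the derivative d/dk = 1
      out = toy_model(k)
      print("c(2) =", out.val, "  dc/dk =", out.der)   # analytic check: -t * c0 * exp(-k * t)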

  2. [Computer mediated discussion and attitude polarization].

    Science.gov (United States)

    Shiraishi, Takashi; Endo, Kimihisa; Yoshida, Fujio

    2002-10-01

    This study examined the hypothesis that computer mediated discussions lead to more extreme decisions than face-to-face (FTF) meetings. Kiesler, Siegel, & McGuire (1984) claimed that computer mediated communication (CMC) tended to be relatively uninhibited, as seen in 'flaming', and that group decisions under CMC using the Choice Dilemma Questionnaire tended to be more extreme and riskier than in FTF meetings. However, for the same reason, CMC discussions on controversial social issues, for which participants initially hold strongly opposing views, might be less likely to reach a consensus, and no polarization should occur. Fifteen 4-member groups discussed a controversial social issue under one of three conditions: FTF, CMC, and partition. After discussion, participants rated their position as a group on a 9-point bipolar scale ranging from strong disagreement to strong agreement. A stronger polarization effect was observed for FTF groups than for those where members were separated with partitions. However, no extreme shift from their original, individual positions was found for CMC participants. These results were discussed in terms of 'expertise and status equalization' and 'absence of social context cues' under CMC.

  3. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests 80,000x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  4. Use of computer games as an intervention for stroke.

    Science.gov (United States)

    Proffitt, Rachel M; Alankus, Gazihan; Kelleher, Caitlin L; Engsberg, Jack R

    2011-01-01

    Current rehabilitation for persons with hemiparesis after stroke requires high numbers of repetitions to be in accordance with contemporary motor learning principles. The motivational characteristics of computer games can be harnessed to create engaging interventions for persons with hemiparesis after stroke that incorporate this high number of repetitions. The purpose of this case report was to test the feasibility of using computer games as a 6-week home therapy intervention to improve upper extremity function for a person with stroke. One person with left upper extremity hemiparesis after stroke participated in a 6-week home therapy computer game intervention. The games were customized to her preferences and abilities and modified weekly. Her performance was tracked and analyzed. Data from pre-, mid-, and postintervention testing using standard upper extremity measures and the Reaching Performance Scale (RPS) were analyzed. After 3 weeks, the participant demonstrated increased upper extremity range of motion at the shoulder and decreased compensatory trunk movements during reaching tasks. After 6 weeks, she showed functional gains in activities of daily living (ADLs) and instrumental ADLs despite no further improvements on the RPS. Results indicate that computer games have the potential to be a useful intervention for people with stroke. Future work will add additional support to quantify the effectiveness of the games as a home therapy intervention for persons with stroke.

  5. Attitude extremity, consensus and diagnosticity

    NARCIS (Netherlands)

    van der Pligt, J.; Ester, P.; van der Linden, J.

    1983-01-01

    Studied the effects of attitude extremity on perceived consensus and willingness to ascribe trait terms to others with either pro- or antinuclear attitudes. 611 Ss rated their attitudes toward nuclear energy on a 5-point scale. Results show that attitude extremity affected consensus estimates. Trait

  6. Automation Rover for Extreme Environments

    Science.gov (United States)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  7. Dual-Energy Computed Tomography Angiography of the Lower Extremity Runoff: Impact of Noise-Optimized Virtual Monochromatic Imaging on Image Quality and Diagnostic Accuracy.

    Science.gov (United States)

    Wichmann, Julian L; Gillott, Matthew R; De Cecco, Carlo N; Mangold, Stefanie; Varga-Szemes, Akos; Yamada, Ricardo; Otani, Katharina; Canstein, Christian; Fuller, Stephen R; Vogl, Thomas J; Todoran, Thomas M; Schoepf, U Joseph

    2016-02-01

    The aim of this study was to evaluate the impact of a noise-optimized virtual monochromatic imaging algorithm (VMI+) on image quality and diagnostic accuracy at dual-energy computed tomography angiography (CTA) of the lower extremity runoff. This retrospective Health Insurance Portability and Accountability Act-compliant study was approved by the local institutional review board. We evaluated dual-energy CTA studies of the lower extremity runoff in 48 patients (16 women; mean age, 63.3 ± 13.8 years) performed on a third-generation dual-source CT system. Images were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monochromatic (VMI) algorithms at 40 to 120 keV in 10-keV intervals. Vascular attenuation and image noise in 18 artery segments were measured; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used to subjectively evaluate vascular attenuation and image noise. In a subgroup of 21 patients who underwent additional invasive catheter angiography, the diagnostic accuracy for the detection of significant stenosis (≥50% lumen restriction) of the F_0.5, 50-keV VMI+, and 60-keV VMI data sets was assessed. Objective image quality metrics were highest in the 40- and 50-keV VMI+ series (SNR: 20.2 ± 10.7 and 19.0 ± 9.5, respectively; CNR: 18.5 ± 10.3 and 16.8 ± 9.1, respectively) and were significantly higher than those obtained with the traditional VMI technique and with standard linear blending for evaluation of the lower extremity runoff using dual-energy CTA.

  8. Consolidation of cloud computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  9. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a "pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we have devised a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated using its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to it. From the estimated time complexity the required clock frequency of the host is computed. The CPU frequency is then scaled up or down accordingly using the DVFS scheme, enabling up to 55% of total power consumption to be saved.
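
    The frequency-selection step described above can be sketched very roughly as follows. The host list, DVFS frequency table, operation-count mapping, and deadline are all illustrative assumptions, not the paper's actual algorithm or data.

      import math

      def estimated_operations(n, complexity):
          """Very rough operation count from an assumed asymptotic class."""
          return {"O(n)": n, "O(n log n)": n * math.log2(max(n, 2)), "O(n^2)": n ** 2}[complexity]

      def choose_host(hosts, required_capacity):
          """Best fit: the feasible host with the smallest free capacity."""
          feasible = [h for h in hosts if h["free_capacity"] >= required_capacity]
          return min(feasible, key=lambda h: h["free_capacity"]) if feasible else None

      def choose_frequency(freqs_ghz, ops, ops_per_cycle, deadline_s):
          """Lowest DVFS frequency whose predicted execution time still meets the deadline."""
          for f in sorted(freqs_ghz):
              if ops / (ops_per_cycle * f * 1e9) <= deadline_s:
                  return f
          return max(freqs_ghz)

      hosts = [{"name": "h1", "free_capacity": 4}, {"name": "h2", "free_capacity": 2}]
      host = choose_host(hosts, required_capacity=2)
      ops = estimated_operations(10 ** 8, "O(n log n)")
      freq = choose_frequency([1.2, 1.8, 2.4, 3.0], ops, ops_per_cycle=4, deadline_s=0.5)
      print(host["name"], "scaled to", freq, "GHz")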

  10. Stereology of extremes; bivariate models and computation

    Czech Academy of Sciences Publication Activity Database

    Beneš, Viktor; Bodlák, M.; Hlubinka, D.

    2003-01-01

    Roč. 5, č. 3 (2003), s. 289-308 ISSN 1387-5841 R&D Projects: GA AV ČR IAA1075201; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z1075907 Keywords : sample extremes * domain of attraction * normalizing constants Subject RIV: BA - General Mathematics

  11. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)

  12. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
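
    A minimal sketch of the sorting idea described in this record and the previous one, assuming a simple 1D uniform grid: particles are reordered by cell index so that physically adjacent particles sit next to each other in memory, making the charge-accumulation sweep largely sequential. This is an illustration, not the original simulation code; sizes are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)
      n_particles, n_cells, box = 1_000_000, 1024, 1.0

      x = rng.random(n_particles) * box     # particle positions on a 1D domain
      v = rng.normal(size=n_particles)      # some per-particle payload (e.g. velocity)

      # Cell index of each particle on a uniform grid.
      cell = np.minimum((x / box * n_cells).astype(np.int64), n_cells - 1)

      # A nominal amount of sorting: reorder the arrays so particles in the same cell are
      # contiguous, turning the charge-accumulation sweep into mostly sequential memory access.
      order = np.argsort(cell, kind="stable")
      x, v, cell = x[order], v[order], cell[order]

      # Charge accumulation now walks memory in order instead of jumping randomly.
      charge = np.zeros(n_cells)
      np.add.at(charge, cell, 1.0)
      print("total charge:", charge.sum())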

  13. Measuring activity limitations in walking : Development of a hierarchical scale for patients with lower-extremity disorders who live at home

    NARCIS (Netherlands)

    Roorda, LD; Roebroeck, ME; van Tilburg, T; Molenaar, IW; Lankhorst, GJ; Bouter, LM

    2005-01-01

    Objective: To develop a hierarchical scale that measures activity limitations in walking in patients with lower-extremity disorders who live at home. Design: Cross-sectional study. Setting: Orthopedic workshops and outpatient clinics of secondary and tertiary care centers. Participants: Patients

  14. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  15. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    Science.gov (United States)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by convention

  16. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    Science.gov (United States)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step, higher-order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error of AT schemes are less than those of their synchronous counterparts. The stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.
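
    As a toy serial illustration of the asynchrony issue discussed above (not the authors' AT schemes), the sketch below advances a 1D heat equation in which the value shared across an artificial block boundary sometimes arrives one step late, mimicking a delayed halo exchange between PEs. The grid size, delay probability, and interface location are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      N, m, r, steps = 64, 32, 0.25, 200           # grid points, interface index, r = alpha*dt/dx**2
      u = np.sin(np.pi * np.arange(N) / (N - 1))   # initial condition; u = 0 held at both ends

      halo = u[m]          # the left block's locally held copy of its right neighbour's value
      for _ in range(steps):
          new = u.copy()
          for i in range(1, N - 1):
              left = u[i - 1]
              # the last point of the left block deliberately uses the (possibly stale) halo copy
              right = halo if i == m - 1 else u[i + 1]
              new[i] = u[i] + r * (left - 2.0 * u[i] + right)
          # the "message" carrying u[m] arrives on time only 70% of the steps
          if rng.random() < 0.7:
              halo = new[m]
          u = new
      print("max value after", steps, "steps:", u.max())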

  17. Scaling strength distributions in quasi-brittle materials from micro- to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large size samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic sized samples. In terms of scaling, we find that the weakest link scaling applies only to microscopic, and not macroscopic scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
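
    The weakest-link size effect discussed in the abstract can be illustrated with a compact Monte Carlo sketch, assuming independent Weibull-distributed element strengths; the modulus, scale, and sample sizes below are arbitrary and unrelated to the authors' microstructural model.

      import numpy as np

      rng = np.random.default_rng(3)
      m, s0 = 5.0, 100.0       # assumed Weibull modulus and scale of one periodic unit (arbitrary units)

      def sample_strength(n_units, n_samples=20000):
          """Weakest-link sampling: a sample fails when its weakest unit fails."""
          unit_strengths = s0 * rng.weibull(m, size=(n_samples, n_units))
          return unit_strengths.min(axis=1)

      for n in (1, 10, 100, 1000):
          s = sample_strength(n)
          char = np.quantile(s, 1.0 - np.exp(-1.0))      # empirical characteristic (63.2%) strength
          # weakest-link theory predicts the characteristic strength scales as s0 * n**(-1/m)
          print(f"n={n:5d}  empirical={char:7.2f}  predicted={s0 * n ** (-1.0 / m):7.2f}")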

  18. Extremal graph theory

    CERN Document Server

    Bollobas, Bela

    2004-01-01

    The ever-expanding field of extremal graph theory encompasses a diverse array of problem-solving methods, including applications to economics, computer science, and optimization theory. This volume, based on a series of lectures delivered to graduate students at the University of Cambridge, presents a concise yet comprehensive treatment of extremal graph theory.Unlike most graph theory treatises, this text features complete proofs for almost all of its results. Further insights into theory are provided by the numerous exercises of varying degrees of difficulty that accompany each chapter. A

  19. A computational comparison of theory and practice of scale intonation in Byzantine chant

    DEFF Research Database (Denmark)

    Panteli, Maria; Purwins, Hendrik

    2013-01-01

    Byzantine Chant performance practice is quantitatively compared to the Chrysanthine theory. The intonation of scale degrees is quantified, based on pitch class profiles. An analysis procedure is introduced that consists of the following steps: 1) Pitch class histograms are calculated via non-parametric kernel smoothing. 2) Histogram peaks are detected. 3) Phrase ending analysis aids the finding of the tonic to align histogram peaks. 4) The theoretical scale degrees are mapped to the practical ones. 5) A schema of statistical tests detects significant deviations of theoretical scale tuning from the estimated ones in performance practice. The analysis of 94 echoi shows a tendency of the singer to level theoretic particularities of the echos that stand out of the general norm in the octoechos: theoretically extremely large scale steps are diminished in performance.
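
    Steps 1 and 2 of the procedure (a kernel-smoothed pitch-class histogram followed by peak picking) can be sketched generically as below, assuming pitch values are already expressed in cents and folded into one octave. The synthetic scale degrees and bandwidth are assumptions, not the authors' implementation or data.

      import numpy as np

      def pitch_class_histogram(cents, bandwidth=15.0, grid_step=1.0):
          """Circular Gaussian-kernel-smoothed histogram over one octave (0-1200 cents)."""
          grid = np.arange(0.0, 1200.0, grid_step)
          pc = np.asarray(cents, float) % 1200.0
          d = np.abs(grid[:, None] - pc[None, :])
          d = np.minimum(d, 1200.0 - d)                 # wrap-around distance on the pitch-class circle
          hist = np.exp(-0.5 * (d / bandwidth) ** 2).sum(axis=1)
          return grid, hist / hist.sum()

      def histogram_peaks(grid, hist):
          """Grid positions that are local maxima on the circular grid."""
          left, right = np.roll(hist, 1), np.roll(hist, -1)
          return grid[(hist > left) & (hist > right)]

      # synthetic 'performance': notes scattered around assumed scale degrees (in cents)
      rng = np.random.default_rng(4)
      degrees = np.array([0, 200, 350, 500, 700, 850, 1050])
      cents = rng.choice(degrees, size=3000) + rng.normal(0.0, 12.0, size=3000)
      grid, hist = pitch_class_histogram(cents)
      print("detected peaks (cents):", np.round(histogram_peaks(grid, hist)))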

  20. Reliability and validity of the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower limb musculoskeletal disorders.

    Science.gov (United States)

    Negahban, Hossein; Hessam, Masumeh; Tabatabaei, Saeid; Salehi, Reza; Sohani, Soheil Mansour; Mehravar, Mohammad

    2014-01-01

    The aim was to culturally translate and validate the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower extremity musculoskeletal disorders (n = 304). This is a prospective methodological study. After a standard forward-backward translation, psychometric properties were assessed in terms of test-retest reliability, internal consistency, construct validity, dimensionality, and ceiling or floor effects. The acceptable levels of intraclass correlation coefficient >0.70 and Cronbach's alpha coefficient >0.70 were obtained for the Persian LEFS. Correlations between the Persian LEFS and the Short-Form 36 Health Survey (SF-36) subscales of the Physical Health component (rs range = 0.38-0.78) were higher than correlations between the Persian LEFS and SF-36 subscales of the Mental Health component (rs range = 0.15-0.39). A corrected item-total correlation of >0.40 (Spearman's rho) was obtained for all items of the Persian LEFS. Horn's parallel analysis detected a total of two factors. No ceiling or floor effects were detected for the Persian LEFS. The Persian version of the LEFS is a reliable and valid instrument that can be used to measure functional status in Persian-speaking patients with different musculoskeletal disorders of the lower extremity. Implications for Rehabilitation: The Persian lower extremity functional scale (LEFS) is a reliable, internally consistent and valid instrument, with no ceiling or floor effects, to determine the functional status of heterogeneous patients with musculoskeletal disorders of the lower extremity. The Persian version of the LEFS can be used in clinical and research settings to measure function in Iranian patients with different musculoskeletal disorders of the lower extremity.
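
    The internal-consistency statistic reported above (Cronbach's alpha) is straightforward to compute; a generic sketch with simulated item scores follows, with no connection to the actual LEFS data.

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, n_items) matrix of item scores."""
          items = np.asarray(items, float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1.0) * (1.0 - item_var / total_var)

      rng = np.random.default_rng(5)
      ability = rng.normal(size=(200, 1))                       # latent trait shared by all items
      scores = ability + rng.normal(0.0, 0.7, size=(200, 20))   # 20 correlated simulated items
      print("alpha =", round(cronbach_alpha(scores), 3))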

  1. Portable upper extremity robotics is as efficacious as upper extremity rehabilitative therapy: a randomized controlled pilot trial.

    Science.gov (United States)

    Page, Stephen J; Hill, Valerie; White, Susan

    2013-06-01

    To compare the efficacy of a repetitive task-specific practice regimen integrating a portable, electromyography-controlled brace called the 'Myomo' versus usual care repetitive task-specific practice in subjects with chronic, moderate upper extremity impairment. Sixteen subjects (7 males; mean age 57.0 ± 11.02 years; mean time post stroke 75.0 ± 87.63 months; 5 left-sided strokes) exhibiting chronic, stable, moderate upper extremity impairment. Subjects were administered repetitive task-specific practice in which they participated in valued, functional tasks using their paretic upper extremities. Both groups were supervised by a therapist and were administered therapy targeting their paretic upper extremities that was 30 minutes in duration, occurring 3 days/week for eight weeks. One group participated in repetitive task-specific practice entirely while wearing the portable robotic, while the other performed the same activity regimen manually. The upper extremity Fugl-Meyer, Canadian Occupational Performance Measure and Stroke Impact Scale were administered on two occasions before intervention and once after intervention. After intervention, groups exhibited nearly identical Fugl-Meyer score increases of ≈2.1 points; the group using robotics exhibited larger score changes on all but one of the Canadian Occupational Performance Measure and Stroke Impact Scale subscales, including a 12.5-point increase on the Stroke Impact Scale recovery subscale. Findings suggest that therapist-supervised repetitive task-specific practice integrating robotics is as efficacious as manual practice in subjects with moderate upper extremity impairment.

  2. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  3. Potential changes in the extreme climate conditions at the regional scale: from observed data to modelling approaches and towards probabilistic climate change information

    International Nuclear Information System (INIS)

    Gachon, P.; Radojevic, M.; Harding, A.; Saad, C.; Nguyen, V.T.V.

    2008-01-01

    The changes in the characteristics of extreme climate conditions are one of the most critical challenges for all ecosystems, human beings and infrastructure in the context of ongoing global climate change. However, information on extremes needed for impact studies cannot be obtained directly from coarse-scale global climate models (GCMs), due mainly to their difficulty in incorporating the regional-scale feedbacks and processes responsible in part for the occurrence, intensity and duration of extreme events. Downscaling approaches, namely statistical and dynamical downscaling techniques (i.e. SD and RCM), have emerged as useful tools to develop high-resolution climate change information, in particular for extremes, as they are theoretically more capable of taking into account regional/local forcings and their feedbacks from large-scale influences, since they are driven with GCM synoptic variables. Nevertheless, in spite of the potential added value of downscaling methods (statistical and dynamical), a rigorous assessment of these methods is needed, as inherent difficulties in simulating extremes are still present. In this paper, different series of RCM and SD simulations using three different GCMs are presented and evaluated with respect to observed values over the current period and over a river basin in southern Quebec, with future ensemble runs centered over the 2050s (i.e. the 2041-2070 period using the SRES A2 emission scenario). Results suggest that the downscaling performance over the baseline period varies significantly between the two downscaling techniques and over the various seasons, with more regularly reliable simulated values from the SD technique for temperature than from the RCM runs, while both approaches produce quite similar temperature changes in the future from median values, with more divergence for extremes. For precipitation, less accurate information is obtained compared to observed data, and with more differences among models with higher uncertainties in the

  4. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre-resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre-resolution patient anatomy, it is possible to obtain a millimetre-resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre-resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for these low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. The E-vector-field distribution for both a simple phantom and the complex partial patient geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlation 97% and 96%, absolute averaged difference 6% and 14%, respectively). (author)
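
    The two simpler down-scaling strategies compared in the study can be sketched generically: 'winner-takes-all' keeps the dominant tissue label in each coarse voxel, while 'volumetric averaging' averages a scalar property over the block (an isotropic simplification; the anisotropic variant is not reproduced here). The grid size, down-scaling factor, labels, and property table below are placeholders.

      import numpy as np

      def winner_takes_all(labels, f):
          """Each coarse voxel takes the most frequent fine-grid tissue label in its f x f x f block."""
          nz, ny, nx = (s // f for s in labels.shape)
          blocks = labels[:nz * f, :ny * f, :nx * f].reshape(nz, f, ny, f, nx, f)
          blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(nz, ny, nx, f ** 3)
          return np.array([[[np.bincount(b).argmax() for b in row] for row in plane] for plane in blocks])

      def volumetric_average(labels, props, f):
          """Each coarse voxel takes the volume average of a scalar property over its block."""
          values = props[labels]
          nz, ny, nx = (s // f for s in values.shape)
          blocks = values[:nz * f, :ny * f, :nx * f].reshape(nz, f, ny, f, nx, f)
          return blocks.mean(axis=(1, 3, 5))

      rng = np.random.default_rng(6)
      fine = rng.integers(0, 3, size=(20, 20, 20))     # placeholder segmented geometry, 3 tissue types
      conductivity = np.array([0.0, 0.5, 2.0])         # placeholder property value per tissue label
      print(winner_takes_all(fine, 5).shape, volumetric_average(fine, conductivity, 5).shape)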

  5. Scale-up and optimization of biohydrogen production reactor from laboratory-scale to industrial-scale on the basis of computational fluid dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)

    2010-10-15

    The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors is still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of hydrodynamics evaluation, the structure of industrial-scale CSTR was optimized and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)

  6. Large-scale visualization system for grid environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of Japan Atomic Energy Agency (CCSE/JAEA) has been conducting R and Ds of distributed computing (grid computing) environments: Seamless Thinking Aid (STA), Information Technology Based Laboratory (ITBL) and Atomic Energy Grid InfraStructure (AEGIS). In these R and Ds, we have developed the visualization technology suitable for the distributed computing environment. As one of the visualization tools, we have developed the Parallel Support Toolkit (PST) which can execute the visualization process parallely on a computer. Now, we improve PST to be executable simultaneously on multiple heterogeneous computers using Seamless Thinking Aid Message Passing Interface (STAMPI). STAMPI, we have developed in these R and Ds, is the MPI library executable on a heterogeneous computing environment. The improvement realizes the visualization of extremely large-scale data and enables more efficient visualization processes in a distributed computing environment. (author)

  7. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
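
    Conceptually, the de Bruijn construction underlying assemblers of this kind links (k-1)-mers through the k-mers observed in reads; a single-machine toy version is sketched below. GiGA itself distributes this over Hadoop/Giraph, which is not reproduced here, and the reads and k value are made up.

      from collections import defaultdict

      def de_bruijn(reads, k):
          """Map each (k-1)-mer to the set of (k-1)-mers that follow it in some read."""
          graph = defaultdict(set)
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  graph[kmer[:-1]].add(kmer[1:])
          return graph

      reads = ["ACGTACGTGACG", "GTACGTGACGTT"]         # toy reads
      for node, successors in sorted(de_bruijn(reads, k=4).items()):
          print(node, "->", ", ".join(sorted(successors)))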

  8. Fitting and Analyzing Randomly Censored Geometric Extreme Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Muhammad Yameen Danish

    2016-06-01

    The paper presents the Bayesian analysis of the two-parameter geometric extreme exponential distribution with randomly censored data. A continuous conjugate prior for the scale and shape parameters of the model does not exist; in computing the Bayes estimates, it is assumed that the scale and shape parameters have independent gamma priors. Since closed-form expressions for the Bayes estimators are not possible, we suggest Lindley's approximation to obtain the Bayes estimates. However, Bayesian credible intervals cannot be constructed with this method, so we propose Gibbs sampling to obtain the Bayes estimates and also to construct the Bayesian credible intervals. A Monte Carlo simulation study is carried out to observe the behavior of the Bayes estimators and to compare them with the maximum likelihood estimators. One real data analysis is performed for illustration.

  9. Performing three-dimensional neutral particle transport calculations on tera scale computers

    International Nuclear Information System (INIS)

    Woodward, C.S.; Brown, P.N.; Chang, B.; Dorr, M.R.; Hanebutte, U.R.

    1999-01-01

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines the MPI message passing and paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP 'ASCI Blue-Pacific' computer located at Lawrence Livermore National Laboratory (LLNL)

  10. Extreme Programming: Maestro Style

    Science.gov (United States)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme

  11. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Di Girolamo, Alessandro; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  12. Extreme Temperature Regimes during the Cool Season and their Associated Large-Scale Circulations

    Science.gov (United States)

    Xie, Z.

    2015-12-01

    In the cool season (November-March), extreme temperature events (ETEs) regularly hit the continental United States (US) and have significant societal impacts. According to the anomalous amplitude of the surface air temperature (SAT), there are two typical types of ETEs: cold waves (CWs) and warm waves (WWs). This study used cluster analysis to categorize both CWs and WWs into four distinct regimes respectively and investigated their associated large-scale circulations on the intra-seasonal time scale. Most of the CW regimes have a large areal impact over the continental US. However, the distribution of cold SAT anomalies varies appreciably across the four regimes. At sea level, the four CW regimes are characterized by anomalous high pressure over North America (near and to the west of the cold anomaly) with differing extension and orientation. As a result, anomalous northerlies along the east flank of the anomalous high pressure convey cold air into the continental US. In the middle troposphere, the leading two groups feature a large-scale and zonally elongated circulation anomaly pattern, while the other two regimes exhibit a synoptic wavetrain pattern with meridionally elongated features. As for the WW regimes, there is some symmetry and anti-symmetry of the patterns with respect to the CW regimes. The WW regimes are characterized by anomalous low pressure and southerly winds over North America. The first and fourth groups are affected by remote forcing emanating from the North Pacific, while the others appear to be mainly locally forced.
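
    The regime classification described above rests on cluster analysis of SAT anomaly patterns; the sketch below runs a bare-bones k-means pass over synthetic anomaly maps purely to illustrate the mechanics. The number of clusters, grid size, and synthetic events are assumptions and do not reproduce the study's domain, method details, or data.

      import numpy as np

      def kmeans(X, k, iters=100, seed=0):
          """Plain Lloyd's algorithm; X is an (n_events, n_gridpoints) matrix of SAT anomalies."""
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), size=k, replace=False)]
          for _ in range(iters):
              dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
              labels = dists.argmin(axis=1)
              new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                      for j in range(k)])
              if np.allclose(new_centers, centers):
                  break
              centers = new_centers
          return labels, centers

      rng = np.random.default_rng(7)
      base_patterns = rng.normal(size=(4, 500))        # four synthetic "circulation regimes"
      events = base_patterns[rng.integers(0, 4, size=240)] + rng.normal(0.0, 0.5, size=(240, 500))
      labels, _ = kmeans(events, k=4)
      print("events per regime:", np.bincount(labels, minlength=4))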

  13. Differential Juvenile Hormone Variations in Scale Insect Extreme Sexual Dimorphism.

    Directory of Open Access Journals (Sweden)

    Isabelle Mifom Vea

    Scale insects have evolved extreme sexual dimorphism, as demonstrated by sedentary juvenile-like females and ephemeral winged males. This dimorphism is established during post-embryonic development; however, the underlying regulatory mechanisms have not yet been examined. We herein assessed the role of juvenile hormone (JH) in the diverging developmental pathways occurring in the male and female Japanese mealybug Planococcus kraunhiae (Kuwana). We provide, for the first time, detailed gene expression profiles related to JH signaling in scale insects. Prior to adult emergence, the transcript levels of JH acid O-methyltransferase, encoding a rate-limiting enzyme in JH biosynthesis, were higher in males than in females, suggesting that JH levels are higher in males. Furthermore, male quiescent pupal-like stages were associated with higher transcript levels of the JH receptor gene, Methoprene-tolerant, and its co-activator taiman, as well as the JH early-response genes, Krüppel homolog 1 and broad. The exposure of male juveniles to an ectopic JH mimic prolonged the expression of Krüppel homolog 1 and broad, and delayed adult emergence by producing a supernumerary pupal stage. We propose that male wing development is first induced by up-regulated JH signaling relative to the female expression pattern, but that a decrease at the end of the prepupal stage is necessary for adult emergence, as evidenced by the JH mimic treatments. Furthermore, wing development seems linked to JH titers, as JH mimic treatments at the pupal stage led to wing deformation. The female pedomorphic appearance was not reflected by the maintenance of high levels of JH. The results of this study suggest that differential variations in JH signaling may be responsible for sex-specific and radically different modes of metamorphosis.

  14. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Roger [Univ. of Southern California, Los Angeles, CA (United States)

    2017-04-18

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.

  15. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together...

  16. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    Science.gov (United States)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micro- to tens-of-nanometre resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, since imaging at micron or sub-micron resolution can only be performed on samples of millimetre scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples and biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at the pore scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanics computations at the pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from a current research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.

  17. 3D artefact for concurrent scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises a carbon fibre tubular structure on which a number of reference ruby spheres are glued. The artefact is positioned and scanned together with the workpiece inside the CT scanner...

  18. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H [Institute for Molecular Science, Okazaki, Aichi (Japan)

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  19. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  20. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    Science.gov (United States)

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation is based on computer graphics to generate a realistic 3D structural scene of vegetation, and the canopy radiation regime is simulated using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are usually complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build up a realistic structural scene for a forest, and it is difficult for the radiosity method to compute so many facets. In order to enable the radiosity method to simulate the forest scene at the pixel scale, the authors proposed to simplify the structure of the forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the characteristics of internal photon transport in a real crown, the authors assigned optical characteristics to the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometrical-optics models, a gap model is used to obtain the forest canopy bidirectional reflectance at the pixel scale. Comparing the computer simulation results with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results agree with the GOMS simulation results and the MISR BRF, although some problems remain to be solved. The authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.

  1. Simulation of large scale air detritiation operations by computer modeling and bench-scale experimentation

    International Nuclear Information System (INIS)

    Clemmer, R.G.; Land, R.H.; Maroni, V.A.; Mintz, J.M.

    1978-01-01

    Although some experience has been gained in the design and construction of 0.5 to 5 m³/s air-detritiation systems, little information is available on the performance of these systems under realistic conditions. Recently completed studies at ANL have attempted to provide some perspective on this subject. A time-dependent computer model was developed to study the effects of various reaction and soaking mechanisms that could occur in a typically-sized fusion reactor building (approximately 10⁵ m³) following a range of tritium releases (2 to 200 g). In parallel with the computer study, a small (approximately 50 liter) test chamber was set up to investigate cleanup characteristics under conditions which could also be simulated with the computer code. Whereas results of computer analyses indicated that only approximately 10⁻³ percent of the tritium released to an ambient enclosure should be converted to tritiated water, the bench-scale experiments gave evidence of conversions to water greater than 1%. Furthermore, although the amounts (both calculated and observed) of soaked-in tritium are usually only a very small fraction of the total tritium release, the soaked tritium is significant, in that its continuous return to the enclosure extends the cleanup time beyond the predicted value in the absence of any soaking mechanisms

  2. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    Science.gov (United States)

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  3. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    Science.gov (United States)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. Forty of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW values are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress drop and the Wood-Anderson filter. For instance, it is shown that the stress drop controls the saturation of the ML scale, with low stress drops (e.g. 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
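
    The bootstrap-plus-L1 fitting step described above lends itself to a compact sketch. The code below is a hedged illustration on synthetic magnitudes, using ordinary (not orthogonal) L1 regression for a quadratic ML-MW relation; it is not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        # Hypothetical catalogue of paired local and moment magnitudes
        ml = rng.uniform(1.3, 5.0, size=679)
        mw = 0.05 * ml**2 + 0.6 * ml + 0.7 + rng.normal(0.0, 0.15, size=ml.size)

        def l1_misfit(coef, x, y):
            # quadratic MW(ML); the L1 norm is robust to outliers
            return np.sum(np.abs(np.polyval(coef, x) - y))

        def fit(x, y):
            res = minimize(l1_misfit, x0=[0.0, 1.0, 0.0], args=(x, y),
                           method="Nelder-Mead")
            return res.x

        boot = []
        for _ in range(200):
            idx = rng.integers(0, ml.size, ml.size)   # resample with replacement
            boot.append(fit(ml[idx], mw[idx]))
        boot = np.array(boot)

        print("median coefficients (a2, a1, a0):", np.round(np.median(boot, axis=0), 3))
        print("16th-84th percentile spread:", np.round(np.percentile(boot, [16, 84], axis=0), 3))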

  4. Rotational profile of the lower extremity in achondroplasia: computed tomographic examination of 25 patients

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hae-Ryong; Suh, Seung-Woo [Korea University Guro Hospital, Department of Orthopaedic Surgery, Rare Diseases Institute, Seoul (Korea); Choonia, Abi-Turab [Laud Clinic, Department of Orthopaedic Surgery, Mumbai (India); Hong, Suk Joo; Cha, In Ho [Korea University Guro Hospital, Department of Radiology, Seoul (Korea); Lee, Seok-Hyun [Dongguk University Ilsan Buddhist Hospital, Department of Orthopaedic Surgery, Goyang (Korea); Park, Jong-Tae [Korea University Ansan Hospital, Department of Occupational and Environmental Medicine, Ansan (Korea)

    2006-12-15

    To evaluate lower-extremity rotational abnormalities in subjects with achondroplasia using computed tomography (CT) scans. CT scans were performed in 25 subjects with achondroplasia (13 skeletally immature, mean age 8.7 years; 12 skeletally mature, mean age 17.6 years). In a total of 50 bilateral limbs, CT images were used to measure the angles of acetabular anteversion, femoral anteversion, and tibial external torsion. Measurement was performed by three examiners and then repeated by one examiner. Inter- and intraobserver agreements were analyzed, and results were compared with previously reported normal values. Mean values for skeletally immature and skeletally mature subjects were 13.6±7.5 and 21.5±6.4, respectively, for acetabular anteversion, 27.1±20.8 and 30.5±20.1 for femoral torsion, and 21.6±10.6 and 22.5±10.8 for tibial torsion. Intra- and interobserver agreements were good to excellent. Acetabular anteversion and femoral anteversion in skeletally mature subjects were greater than normal values in previous studies. Both skeletally immature and mature subjects with achondroplasia had decreased tibial torsion compared to normal skeletally immature and mature subjects. Lower-extremity rotational abnormalities in subjects with achondroplasia include decreased tibial external torsion in both skeletally immature and mature subjects, as well as increased femoral and acetabular anteversion in skeletally mature subjects. (orig.)

  5. Rotational profile of the lower extremity in achondroplasia: computed tomographic examination of 25 patients

    International Nuclear Information System (INIS)

    Song, Hae-Ryong; Suh, Seung-Woo; Choonia, Abi-Turab; Hong, Suk Joo; Cha, In Ho; Lee, Seok-Hyun; Park, Jong-Tae

    2006-01-01

    To evaluate lower-extremity rotational abnormalities in subjects with achondroplasia using computed tomography (CT) scans. CT scans were performed in 25 subjects with achondroplasia (13 skeletally immature, mean age 8.7 years; 12 skeletally mature, mean age 17.6 years). In a total of 50 bilateral limbs, CT images were used to measure the angles of acetabular anteversion, femoral anteversion, and tibial external torsion. Measurement was performed by three examiners and then repeated by one examiner. Inter- and intraobserver agreements were analyzed, and results were compared with previously reported normal values. Mean values for skeletally immature and skeletally mature subjects were 13.6±7.5 and 21.5±6.4, respectively, for acetabular anteversion, 27.1±20.8 and 30.5±20.1 for femoral torsion, and 21.6±10.6 and 22.5±10.8 for tibial torsion. Intra- and interobserver agreements were good to excellent. Acetabular anteversion and femoral anteversion in skeletally mature subjects were greater than normal values in previous studies. Both skeletally immature and mature subjects with achondroplasia had decreased tibial torsion compared to normal skeletally immature and mature subjects. Lower-extremity rotational abnormalities in subjects with achondroplasia include decreased tibial external torsion in both skeletally immature and mature subjects, as well as increased femoral and acetabular anteversion in skeletally mature subjects. (orig.)

  6. Evaluation of seabed mapping methods for fine-scale classification of extremely shallow benthic habitats - Application to the Venice Lagoon, Italy

    Science.gov (United States)

    Montereale Gavazzi, G.; Madricardo, F.; Janowski, L.; Kruss, A.; Blondel, P.; Sigovini, M.; Foglini, F.

    2016-03-01

    Recent technological developments of multibeam echosounder systems (MBES) allow mapping of benthic habitats with unprecedented detail. MBES can now be employed in extremely shallow waters, challenging data acquisition (as these instruments were often designed for deeper waters) and data interpretation (honed on datasets with resolution sometimes orders of magnitude lower). With extremely high-resolution bathymetry and co-located backscatter data, it is now possible to map the spatial distribution of fine-scale benthic habitats, even identifying the acoustic signatures of single sponges. In this context, it is necessary to understand which of the commonly used segmentation methods is best suited to account for such a level of detail. At the same time, new sampling protocols for precisely geo-referenced ground truth data need to be developed to validate the benthic environmental classification. This study focuses on a dataset collected in a shallow (2-10 m deep) tidal channel of the Lagoon of Venice, Italy. Using 0.05-m and 0.2-m raster grids, we compared a range of classification approaches, both pixel-based and object-based, including manual classification, the Maximum Likelihood Classifier, Jenks optimization clustering, textural analysis and Object-Based Image Analysis. Through a comprehensive and accurately geo-referenced ground truth dataset, we were able to identify five different substrate composition classes, including sponges, mixed submerged aquatic vegetation, mixed detritic bottom (fine and coarse) and unconsolidated bare sediment. We computed estimates of accuracy (namely Overall, User and Producer Accuracies and the Kappa statistic) by cross-tabulating predicted and reference instances. Overall, pixel-based segmentations produced the highest accuracies, and the accuracy assessment was strongly dependent on the number of classes chosen for the thematic output. Tidal channels in the Venice Lagoon are extremely important in terms of habitats and sediment distribution
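
    The accuracy estimates mentioned above (overall, user's and producer's accuracies and the kappa statistic) all derive from the cross-tabulation of predicted and reference instances. A minimal sketch follows; the confusion-matrix counts are invented, and the assumed convention is rows = predicted classes, columns = reference classes.

        import numpy as np

        def accuracy_metrics(cm):
            """Accuracy estimates from a confusion matrix (rows: predicted, cols: reference)."""
            cm = np.asarray(cm, dtype=float)
            total = cm.sum()
            overall = np.trace(cm) / total
            user = np.diag(cm) / cm.sum(axis=1)        # per predicted (map) class
            producer = np.diag(cm) / cm.sum(axis=0)    # per reference class
            expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
            kappa = (overall - expected) / (1.0 - expected)
            return overall, user, producer, kappa

        # Hypothetical 5-class cross-tabulation (sponges, mixed SAV, fine detritic,
        # coarse detritic, bare sediment) -- the counts are invented for illustration.
        cm = [[42, 3, 1, 0, 2],
              [4, 55, 6, 1, 0],
              [0, 5, 38, 7, 3],
              [1, 0, 6, 30, 5],
              [2, 1, 2, 4, 61]]
        overall, user, producer, kappa = accuracy_metrics(cm)
        print(f"overall accuracy = {overall:.2f}, kappa = {kappa:.2f}")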

  7. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations are those that involve such a variety of scales and such physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, analysis of the uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to establish a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches modeled on human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to the deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and in establishing a degree of certainty. (author)

  8. Extremal dynamics in random replicator ecosystems

    Energy Technology Data Exchange (ETDEWEB)

    Kärenlampi, Petri P., E-mail: petri.karenlampi@uef.fi

    2015-10-02

    The seminal numerical experiment by Bak and Sneppen (BS) is repeated, along with computations with replicator models that include a greater number of features. Both types of models self-organize, and both obey power-law scaling for the size distribution of activity cycles. However, species extinction within the replicator models interferes with the BS self-organized critical (SOC) activity. Speciation–extinction dynamics ruins any stationary state which might contain a steady size distribution of activity cycles. The BS-type activity appears as a dissimilar phenomenon in comparison to speciation–extinction dynamics in the replicator system. No criticality is found in the speciation–extinction dynamics. Neither are speciations and extinctions in real biological macroevolution known to exhibit any diverging distributions, or self-organization towards any critical state. Consequently, biological macroevolution probably is not a self-organized critical phenomenon. - Highlights: • Extremal dynamics organizes random replicator ecosystems into two phases in fitness space. • Replicator systems show power-law scaling of activity. • Species extinction interferes with Bak–Sneppen type mutation activity. • Speciation–extinction dynamics does not show any critical phase transition. • Biological macroevolution probably is not a self-organized critical phenomenon.
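
    For readers unfamiliar with the BS experiment referenced above, the sketch below is a generic textbook implementation of the extremal-dynamics rule (replace the least-fit species and its nearest neighbours with fresh random fitnesses); it is not the paper's extended replicator code.

        import numpy as np

        def bak_sneppen(n_species=200, n_steps=100_000, seed=1):
            """Minimal Bak-Sneppen sketch: at each step the least-fit species and its
            two neighbours (periodic ring) receive new random fitnesses."""
            rng = np.random.default_rng(seed)
            fitness = rng.random(n_species)
            minima = np.empty(n_steps)
            for t in range(n_steps):
                i = np.argmin(fitness)
                minima[t] = fitness[i]
                for j in (i - 1, i, (i + 1) % n_species):
                    fitness[j] = rng.random()
            return fitness, minima

        fitness, minima = bak_sneppen()
        # After self-organization the selected minima stay below a critical fitness
        # (close to the known ~0.667 for the 1D model).
        print("upper envelope of late-time minima:", round(float(minima[-20_000:].max()), 3))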

  9. The diagnostic value of time-resolved MR angiography with Gadobutrol at 3 T for preoperative evaluation of lower extremity tumors: Comparison with computed tomography angiography

    International Nuclear Information System (INIS)

    Wu, Gang; Jin, Teng; Li, Ting; Li, Xiaoming

    2016-01-01

    To evaluate the diagnostic value of time-resolved magnetic resonance angiography with interleaved stochastic trajectory (TWIST) using Gadobutrol for preoperative evaluation of lower extremity tumors. This prospective study was approved by the local Institutional Review Board. 50 consecutive patients (31 men, 19 women, age range 18–80 years, average age 42.7 years) with lower extremity tumors underwent TWIST and computed tomography angiography (CTA). The image quality of TWIST and CTA was evaluated by two radiologists according to a 4-point scale. The degree of arterial stenosis caused by tumor was assessed using TWIST and CTA separately, and the intra-modality agreement was determined using a kappa test. The number of feeding arteries identified by TWIST was compared with that by CTA using the Wilcoxon signed rank test. The ability to identify arterio-venous fistulae (AVF) was compared using a chi-square test. The image quality of TWIST and CTA was rated as 3.88 ± 0.37 and 3.97 ± 0.16, without statistically significant difference (P = 0.135). Intra-modality agreement was excellent for the assessment of arterial stenosis (kappa = 0.806 ± 0.073 for Reader 1, kappa = 0.805 ± 0.073 for Reader 2). Readers identified AVF with TWIST in 27 of 50 cases, and identified AVF with CTA in 14 of 50 (P < 0.001). The mean number of feeding arteries identified with TWIST was significantly higher than with CTA (2.08 ± 1.72 vs 1.62 ± 1.52, P = 0.02). TWIST is a reliable imaging modality for the assessment of lower extremity tumors. TWIST is comparable to CTA for the identification of AVF and feeding arteries.

  10. Automatic detection of ischemic stroke based on scaling exponent electroencephalogram using extreme learning machine

    Science.gov (United States)

    Adhi, H. A.; Wijaya, S. K.; Prawito; Badri, C.; Rezal, M.

    2017-03-01

    Stroke is one of the cerebrovascular diseases caused by the obstruction of blood flow to the brain. Stroke is the leading cause of death in Indonesia and the second leading cause worldwide; it is also a major cause of disability. Ischemic stroke accounts for most stroke cases. Obstruction of blood flow can cause tissue damage, which results in electrical changes in the brain that can be observed through the electroencephalogram (EEG). In this study, we present the results of automatic classification of ischemic stroke patients and normal subjects based on the scaling exponent of the EEG obtained through detrended fluctuation analysis (DFA), using an extreme learning machine (ELM) as the classifier. The signal processing was performed with 18 channels of EEG in the range of 0-30 Hz. The scaling exponents of the subjects were used as the input for the ELM to classify ischemic stroke. The detection performance was assessed by the values of accuracy, sensitivity and specificity. The results showed that the proposed method classified ischemic stroke with 84% accuracy, 82% sensitivity and 87% specificity, using 120 hidden neurons and a sine activation function in the ELM.
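
    The scaling exponent used as the classifier input above comes from detrended fluctuation analysis. A minimal single-channel DFA sketch is shown below; the EEG segment is stand-in white noise and the scale range is an assumption, not the study's preprocessing pipeline.

        import numpy as np

        def dfa_exponent(signal, scales=None):
            """Detrended fluctuation analysis: slope of log F(n) versus log n,
            with linear detrending in each window; returns the exponent alpha."""
            x = np.cumsum(signal - np.mean(signal))            # integrated profile
            if scales is None:
                scales = np.unique(np.logspace(2, 3.5, 15).astype(int))
            flucts = []
            for n in scales:
                n_win = len(x) // n
                segs = x[: n_win * n].reshape(n_win, n)
                t = np.arange(n)
                rms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, 1)               # local linear trend
                    rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
                flucts.append(np.mean(rms))
            alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return alpha

        # Hypothetical single-channel EEG segment (white noise here, so alpha ~ 0.5).
        eeg = np.random.default_rng(0).normal(size=30_000)
        print("scaling exponent:", round(dfa_exponent(eeg), 2))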

  11. Assessing future climatic changes of rainfall extremes at small spatio-temporal scales

    DEFF Research Database (Denmark)

    Gregersen, Ida Bülow; Sørup, Hjalte Jomo Danielsen; Madsen, Henrik

    2013-01-01

    Climate change is expected to influence the occurrence and magnitude of rainfall extremes and hence the flood risks in cities. Major impacts of an increased pluvial flood risk are expected to occur at hourly and sub-hourly resolutions. This makes convective storms the dominant rainfall type … in relation to urban flooding. The present study focuses on high-resolution regional climate model (RCM) skill in simulating sub-daily rainfall extremes. Temporal and spatial characteristics of output from three different RCM simulations with 25 km resolution are compared to point rainfall extremes estimated … from observed data. The applied RCM data sets represent two different models and two different types of forcing. Temporal changes in observed extreme point rainfall are partly reproduced by the RCM RACMO when forced by ERA40 re-analysis data. Two ECHAM forced simulations show similar increases …

  12. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
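
    The shard-map-reduce pattern described above can be illustrated with a small stand-alone sketch. This is not the SciReduce API; it simply fakes per-year data loading and combines per-year partial sums into a multi-year mean using Python's multiprocessing.

        from multiprocessing import Pool
        import numpy as np

        YEARS = range(2003, 2013)   # a decade of yearly shards (assumption)

        def load_year(year):
            # assumption: in the real system this would be an OPeNDAP/netCDF4 subset
            rng = np.random.default_rng(year)
            return rng.normal(loc=30.0, scale=5.0, size=(365, 90, 180))  # daily fields

        def map_year(year):
            data = load_year(year)
            return data.sum(axis=0), data.shape[0]        # partial sum and day count

        def reduce_partials(partials):
            total = sum(p for p, _ in partials)
            count = sum(n for _, n in partials)
            return total / count                          # multi-year mean climatology

        if __name__ == "__main__":
            with Pool(4) as pool:
                partials = pool.map(map_year, YEARS)
            climatology = reduce_partials(partials)
            print(climatology.shape, float(climatology.mean()))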

  13. The Need for Optical Means as an Alternative for Electronic Computing

    Science.gov (United States)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Demand for faster computers is growing rapidly to keep pace with the fast growth of the Internet, space communication, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching fundamental limits beyond which devices become unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide the way out of the extreme limitations imposed by conventional electronics on the growth of speed and complexity of today's computations. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building a future optical computer.

  14. A review of parallel computing for large-scale remote sensing image mosaicking

    OpenAIRE

    Chen, Lajiao; Ma, Yan; Liu, Peng; Wei, Jingbo; Jie, Wei; He, Jijun

    2015-01-01

    Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up the calculation process. The state of the art of this field has not yet been summarized, which is, however, essential for a better understanding and for further ...

  15. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    The technologies related to green energy, smart cities and similar areas, which have been developing dynamically in recent years, frequently face problems of a computational rather than a technological nature. One example is the ability to accurately predict weather conditions for PV farms or wind turbines. Another group of issues relates to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term “large-scale” refers to an entire city area, containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform the design-related computations in parallel. An important measure introduced in the article to indicate optimization progress is entropy, which also allows optimization to be terminated when the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach.
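
    The entropy-based termination idea can be illustrated generically, as below. This sketch only demonstrates the principle of watching the Shannon entropy of a score distribution fall as agents converge; the fixed bin edges, threshold and toy convergence model are invented and are not the formulation used in the article.

        import numpy as np

        def solution_entropy(scores, edges):
            """Shannon entropy (bits) of the score distribution over fixed bins."""
            hist, _ = np.histogram(scores, bins=edges)
            p = hist[hist > 0] / hist.sum()
            return float(-(p * np.log2(p)).sum())

        edges = np.linspace(-3.0, 3.0, 25)   # fixed bins over the score range (assumption)
        rng = np.random.default_rng(0)
        scores = rng.normal(0.0, 1.0, size=500)          # initial spread of agent results

        for sweep in range(50):
            # toy model of agents converging on similar designs
            scores = 0.9 * scores + 0.1 * rng.normal(0.0, 0.05, size=scores.size)
            h = solution_entropy(scores, edges)
            if h < 1.0:                                  # illustrative threshold
                print(f"terminating after sweep {sweep}, entropy {h:.2f} bits")
                break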

  16. Scaling and universality of ac conduction in disordered solids

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2000-01-01

    Recent scaling results for the ac conductivity of ionic glasses by Roling et al. [Phys. Rev. Lett. 78, 2160 (1997)] and Sidebottom [Phys. Rev. Lett. 82, 3653 (1999)] are discussed. We prove that Sidebottom's version of scaling is completely general. A new approximation to the universal ac conductivity arising in the extreme disorder limit of the symmetric hopping model, the "diffusion cluster approximation," is presented and compared to computer simulations and experiments.

  17. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  18. Towards scaling up trapped ion quantum information processing

    International Nuclear Information System (INIS)

    Leibfried, D.; Wineland, D. J.; Blakestad, R. B.; Bollinger, J. J.; Britton, J.; Chiaverini, J.; Epstein, R. J.; Itano, W. M.; Jost, J. D.; Knill, E.; Langer, C.; Ozeri, R.; Reichle, R.; Seidelin, S.; Shiga, N.; Wesenberg, J. H.

    2007-01-01

    Recent theoretical advances have identified several computational algorithms that can be implemented utilizing quantum information processing (QIP), which gives an exponential speedup over the corresponding (known) algorithms on conventional computers. QIP makes use of the counter-intuitive properties of quantum mechanics, such as entanglement and the superposition principle. Unfortunately it has so far been impossible to build a practical QIP system that outperforms conventional computers. Atomic ions confined in an array of interconnected traps represent a potentially scalable approach to QIP. All basic requirements have been experimentally demonstrated in one and two qubit experiments. The remaining task is to scale the system to many qubits while minimizing and correcting errors in the system. While this requires extremely challenging technological improvements, no fundamental roadblocks are currently foreseen.

  19. Mathematics and Computer Science | Argonne National Laboratory

    Science.gov (United States)

    Research areas listed: Extreme Computing, Data-Intensive Science, Applied Mathematics, Science & Engineering Applications, and Software.

  20. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function … by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements … are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.

  1. Coupled large-eddy simulation and morphodynamics of a large-scale river under extreme flood conditions

    Science.gov (United States)

    Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team

    2016-11-01

    We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamic modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500 year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach, with nested domains used to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based-engineering approach to investigate the resilience of infrastructure to extreme flood events in intricate field-scale riverine systems. This work was funded by a Grant from the Minnesota Dept. of Transportation.

  2. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files

  3. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  4. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code that predicts plastic-strain-induced texture evolution, yield loci and the formability of sheet metal. The double-scale structure consists of a crystal aggregate (the microstructure) and a macroscopic elastic-plastic continuum. First, crystal morphologies are measured with an SEM-EBSD apparatus, and a unit cell of the microstructure is defined that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'Benchmark' aluminum A6022 polycrystal sheets. The results reveal that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture evolution, the anisotropic hardening evolution and the sheet deformation. Since the multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for quick calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations.

  5. Extreme rainfall, vulnerability and risk: a continental-scale assessment for South America

    Science.gov (United States)

    Vorosmarty, Charles J.; de Guenni, Lelys Bravo; Wollheim, Wilfred M.; Pellerin, Brian A.; Bjerklie, David M.; Cardoso, Manoel; D'Almeida, Cassiano; Colon, Lilybeth

    2013-01-01

    Extreme weather continues to preoccupy society as a formidable public safety concern bearing huge economic costs. While attention has focused on global climate change and how it could intensify key elements of the water cycle such as precipitation and river discharge, it is the conjunction of geophysical and socioeconomic forces that shapes human sensitivity and risks to weather extremes. We demonstrate here the use of high-resolution geophysical and population datasets together with documentary reports of rainfall-induced damage across South America over a multi-decadal, retrospective time domain (1960–2000). We define and map extreme precipitation hazard, exposure, affected populations, vulnerability and risk, and use these variables to analyse the impact of floods as a water security issue. Geospatial experiments uncover major sources of risk from natural climate variability and population growth, with change in climate extremes bearing a minor role. While rural populations display the greatest relative sensitivity to extreme rainfall, urban settings show the highest rates of increasing risk. In the coming decades, rapid urbanization will make South American cities the focal point of future climate threats but also an opportunity for reducing vulnerability, protecting lives and sustaining economic development through both traditional and ecosystem-based disaster risk management systems.

  6. Extreme rainfall, vulnerability and risk: a continental-scale assessment for South America.

    Science.gov (United States)

    Vörösmarty, Charles J; Bravo de Guenni, Lelys; Wollheim, Wilfred M; Pellerin, Brian; Bjerklie, David; Cardoso, Manoel; D'Almeida, Cassiano; Green, Pamela; Colon, Lilybeth

    2013-11-13

    Extreme weather continues to preoccupy society as a formidable public safety concern bearing huge economic costs. While attention has focused on global climate change and how it could intensify key elements of the water cycle such as precipitation and river discharge, it is the conjunction of geophysical and socioeconomic forces that shapes human sensitivity and risks to weather extremes. We demonstrate here the use of high-resolution geophysical and population datasets together with documentary reports of rainfall-induced damage across South America over a multi-decadal, retrospective time domain (1960-2000). We define and map extreme precipitation hazard, exposure, affected populations, vulnerability and risk, and use these variables to analyse the impact of floods as a water security issue. Geospatial experiments uncover major sources of risk from natural climate variability and population growth, with change in climate extremes bearing a minor role. While rural populations display the greatest relative sensitivity to extreme rainfall, urban settings show the highest rates of increasing risk. In the coming decades, rapid urbanization will make South American cities the focal point of future climate threats but also an opportunity for reducing vulnerability, protecting lives and sustaining economic development through both traditional and ecosystem-based disaster risk management systems.

  7. Advancing nanoelectronic device modeling through peta-scale computing and deployment on nanoHUB

    International Nuclear Information System (INIS)

    Haley, Benjamin P; Luisier, Mathieu; Klimeck, Gerhard; Lee, Sunhee; Ryu, Hoon; Bae, Hansang; Saied, Faisal; Clark, Steve

    2009-01-01

    Recent improvements to existing HPC codes NEMO 3-D and OMEN, combined with access to peta-scale computing resources, have enabled realistic device engineering simulations that were previously infeasible. NEMO 3-D can now simulate 1 billion atom systems, and, using 3D spatial decomposition, scale to 32768 cores. Simulation time for the band structure of an experimental P doped Si quantum computing device fell from 40 minutes to 1 minute. OMEN can perform fully quantum mechanical transport calculations for real-world UTB FETs on 147,456 cores in roughly 5 minutes. Both of these tools power simulation engines on the nanoHUB, giving the community access to previously unavailable research capabilities.

  8. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new, fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under the body loading conditions. Finally, the cell-level model includes the bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process located in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within the literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales as well as providing new informative data for clinical decision support and industrial applications.

  9. An accurate and computationally efficient small-scale nonlinear FEA of flexible risers

    OpenAIRE

    Rahmati, MT; Bahai, H; Alfano, G

    2016-01-01

    This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers which can be effectively implemented in a fully-nested (FE2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...

  10. Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.

    Science.gov (United States)

    Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L

    2002-09-01

    We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
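
    To make the preconditioned conjugate-gradient core concrete, a minimal sketch follows. The paper's preconditioner is a multigrid V-cycle; here a simple Jacobi (diagonal) preconditioner and a random symmetric positive-definite test matrix stand in purely for illustration.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
            """Preconditioned conjugate gradients for s.p.d. A x = b; M_inv(r)
            applies an approximate inverse of A (here Jacobi, not multigrid)."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for k in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iter

        # Small s.p.d. test system standing in for a wave-front reconstruction matrix.
        rng = np.random.default_rng(0)
        n = 500
        B = rng.normal(size=(n, n))
        A = B @ B.T + n * np.eye(n)
        b = rng.normal(size=n)
        jacobi = lambda r: r / np.diag(A)
        x, iters = pcg(A, b, jacobi)
        print("iterations:", iters, "residual norm:", float(np.linalg.norm(A @ x - b)))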

  11. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    Science.gov (United States)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which give rise to multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. Under realistic conditions, simulations are carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs, as well as their synchronization, takes up a significant portion of the total simulation time at these extreme scales and results in poor scalability of codes. This issue is likely to pose a bottleneck to the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs, in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is preserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extending this method to solve complex multi-scale problems on Exascale machines.
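
    The effect of relaxed synchronization can be illustrated with a toy problem: an explicit 1D heat-equation update in which the halo values notionally received from neighbouring PEs may be several steps stale. This is only a caricature of the idea, not one of the asynchrony-tolerant schemes proposed in the work; the delay model and parameters are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        nx, nt, alpha = 64, 2000, 0.4           # grid size, steps, diffusion number
        max_delay = 3                            # assumed worst-case message lateness

        x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
        u = np.sin(x)                            # periodic initial condition
        history = [u.copy() for _ in range(max_delay + 1)]   # past fields (halo source)

        for n in range(nt):
            delay = int(rng.integers(0, max_delay + 1))
            stale = history[-1 - delay]
            left, right = stale[-1], stale[0]    # halo values, possibly out of date
            lap = np.empty(nx)
            lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
            lap[0] = u[1] - 2.0 * u[0] + left
            lap[-1] = right - 2.0 * u[-1] + u[-2]
            u = u + alpha * lap
            history = history[1:] + [u.copy()]

        # The update stays stable, but the stale halos add error relative to the
        # fully synchronous scheme.
        print("remaining amplitude:", float(np.abs(u).max()))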

  12. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
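
    A common recipe for turning such biased simulations into rates (sketched here as an illustration, not the paper's exact procedure) is to rescale the biased clock by the instantaneous boost factor and then check that the recovered waiting times are Poisson-distributed. The sketch below uses synthetic waiting times; the temperature and time scales are invented.

        import numpy as np
        from scipy.stats import kstest

        beta = 1.0 / (0.0083145 * 80.0)    # 1/kT in kJ/mol at ~80 K (illustrative)
        rng = np.random.default_rng(1)

        def unbiased_time(bias_kjmol, dt_ps):
            """Rescale the biased clock by exp(beta*V_bias), the usual way of
            recovering physical waiting times from infrequent metadynamics runs."""
            return float(np.sum(dt_ps * np.exp(beta * np.asarray(bias_kjmol))))

        # example of the clock rescaling on a toy 1000-step bias trace (kJ/mol):
        print(f"rescaled toy trace: {unbiased_time(rng.uniform(0.0, 10.0, 1000), 1.0):.2e} ps")

        # Stand-in waiting times for 20 independent condensation runs; in practice
        # each would come from unbiased_time() applied to that run's deposited bias.
        times = rng.exponential(scale=3.6e15, size=20)   # picoseconds, i.e. hours-scale

        tau = times.mean()                                # characteristic nucleation time
        rate = 1.0 / tau
        # For a rare (Poisson) event the waiting times should be exponential; a
        # Kolmogorov-Smirnov test against Exp(1) on the rescaled times checks this.
        stat, pval = kstest(times / tau, "expon")
        print(f"rate ~ {rate:.2e} per ps, KS p-value = {pval:.2f}")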

  13. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    Science.gov (United States)

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing data has created a shortage of efficient approaches for aligning ultra-large sets of biological sequences of different types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets (files larger than 1 GB) showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.

  14. Entropy, extremality, euclidean variations, and the equations of motion

    Science.gov (United States)

    Dong, Xi; Lewkowycz, Aitor

    2018-01-01

    We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G_N, providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G_N. We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.

  15. Determination of bone mineral density of the distal extremity of the radius in Rottweilers by radiographic optical densitometry

    International Nuclear Information System (INIS)

    Alves, Jefferson Douglas Soares; Sterman, Franklin de Almeida

    2010-01-01

    This study allowed the standardization of the bone mineral density (BMD) of the distal extremity of the radius of 36 adult Rottweiler dogs by radiographic optical densitometry. The limbs of the animals were radiographed together with an aluminum scale that served as a reference. The radiographic images were digitized and analyzed by a computer program that compared gray tones between the bone image and the image of the reference scale radiographed alongside it. The density values were then expressed in millimeters of aluminum. Correlations between BMD and sex, weight and external measures such as spine length, animal height and the circumference of the distal extremity of the limb under study were also examined. The mean values and standard deviations of the bone mineral density of the distal extremity of the radius were: for the metaphyseal region, a mean BMD of 7.88±0.89 mmAl; for diaphyseal region 1, a mean BMD of 8.58±0.80 mmAl; and for diaphyseal region 2, a mean BMD of 9.00±0.74 mmAl. (author)

  16. Dimensional scaling for quasistationary states

    International Nuclear Information System (INIS)

    Kais, S.; Herschbach, D.R.

    1993-01-01

    Complex energy eigenvalues which specify the location and width of quasibound or resonant states are computed to good approximation by a simple dimensional scaling method. As applied to bound states, the method involves minimizing an effective potential function in appropriately scaled coordinates to obtain exact energies in the D→∞ limit, then computing approximate results for D=3 by a perturbation expansion in 1/D about this limit. For resonant states, the same procedure is used, with the radial coordinate now allowed to be complex. Five examples are treated: the repulsive exponential potential (e^(-r)); a squelched harmonic oscillator (r^2 e^(-r)); the inverted Kratzer potential (r^(-1) repulsion plus r^(-2) attraction); the Lennard-Jones potential (r^(-12) repulsion, r^(-6) attraction); and quasibound states for the rotational spectrum of the hydrogen molecule (X ^1Σ_g^+, v=0, J=0 to 50). Comparisons with numerical integrations and other methods show that the much simpler dimensional scaling method, carried to second order (terms in 1/D^2), yields good results over an extremely wide range of the ratio of level widths to spacings. Other methods have not yet evaluated the very broad H_2 rotational resonances reported here (J>39), which lie far above the centrifugal barrier.

  17. Counting States of Near-Extremal Black Holes

    International Nuclear Information System (INIS)

    Horowitz, G.T.; Strominger, A.

    1996-01-01

    A six-dimensional black string is considered and its Bekenstein-Hawking entropy computed. It is shown that to leading order above extremality this entropy precisely counts the number of string states with the given energy and charges. This identification implies that Hawking decay of the near-extremal black string can be analyzed in string perturbation theory and is perturbatively unitary. © 1996 The American Physical Society

  18. Large-scale simulation of ductile fracture process of microstructured materials

    International Nuclear Information System (INIS)

    Tian Rong; Wang Chaowei

    2011-01-01

    The promise of computational science in the extreme-scale computing era is to reduce and decompose macroscopic complexities into microscopic simplicities at the expense of high spatial and temporal computing resolution. In materials science and engineering, the direct combination of 3D microstructure data sets and 3D large-scale simulations provides a unique opportunity for developing a comprehensive understanding of nano/microstructure-property relationships in order to systematically design materials with specific desired properties. In this paper, we present a framework for simulating the ductile fracture process zone in microstructural detail. The experimentally reconstructed microstructural data set is directly embedded into a FE mesh model to improve the simulation fidelity of microstructure effects on fracture toughness. To the best of our knowledge, this is the first time fracture toughness has been linked directly to multiscale microstructures in a realistic 3D numerical model. (author)

  19. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on the parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects; discusses applications including scattering from airborne targets, scattering from red

  20. Evolution caused by extreme events.

    Science.gov (United States)

    Grant, Peter R; Grant, B Rosemary; Huey, Raymond B; Johnson, Marc T J; Knoll, Andrew H; Schmitt, Johanna

    2017-06-19

    Extreme events can be a major driver of evolutionary change over geological and contemporary timescales. Outstanding examples are evolutionary diversification following mass extinctions caused by extreme volcanism or asteroid impact. The evolution of organisms in contemporary time is typically viewed as a gradual and incremental process that results from genetic change, environmental perturbation or both. However, contemporary environments occasionally experience strong perturbations such as heat waves, floods, hurricanes, droughts and pest outbreaks. These extreme events set up strong selection pressures on organisms, and are small-scale analogues of the dramatic changes documented in the fossil record. Because extreme events are rare, almost by definition, they are difficult to study. So far most attention has been given to their ecological rather than to their evolutionary consequences. We review several case studies of contemporary evolution in response to two types of extreme environmental perturbations, episodic (pulse) or prolonged (press). Evolution is most likely to occur when extreme events alter community composition. We encourage investigators to be prepared for evolutionary change in response to rare events during long-term field studies. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).

  1. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have negative impact on the accuracy … errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...

  2. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i
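
    The depolarizing channel used as the decoherence source above has a simple Monte Carlo form: with probability p a random Pauli error (X, Y or Z with equal weight) is applied per qubit per step. The sketch below only illustrates that error-injection model with an invented step count; it does not reproduce the fault-tolerant circuit simulations needed for the threshold estimate.

        import numpy as np

        rng = np.random.default_rng(0)

        def depolarize(n_qubits, p):
            """Return one time step of Pauli errors ('I','X','Y','Z') per qubit."""
            draws = rng.random(n_qubits)
            paulis = np.full(n_qubits, "I", dtype="<U1")
            hit = draws < p
            paulis[hit] = rng.choice(["X", "Y", "Z"], size=hit.sum())
            return paulis

        p = 5.2e-6                       # error rate near the reported threshold
        steps, qubits = 100_000, 7       # e.g. one block of Steane's 7-qubit code
        n_errors = sum((depolarize(qubits, p) != "I").sum() for _ in range(steps))
        print("errors injected:", int(n_errors), "expected ~", p * steps * qubits)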

  3. Changes in daily climate extremes in China and their connection to the large scale atmospheric circulation during 1961-2003

    Energy Technology Data Exchange (ETDEWEB)

    You, Qinglong [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); Friedrich-Schiller University Jena, Department of Geoinformatics, Jena (Germany); Graduate University of Chinese Academy of Sciences, Beijing (China); Kang, Shichang [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); State Key Laboratory of Cryospheric Science, Chinese Academy of Sciences, Lanzhou (China); Aguilar, Enric [Universitat Rovira i Virgili de Tarragona, Climate Change Research Group, Geography Unit, Tarragona (Spain); Pepin, Nick [University of Portsmouth, Department of Geography, Portsmouth (United Kingdom); Fluegel, Wolfgang-Albert [Friedrich-Schiller University Jena, Department of Geoinformatics, Jena (Germany); Yan, Yuping [National Climate Center, Beijing (China); Xu, Yanwei; Huang, Jie [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); Graduate University of Chinese Academy of Sciences, Beijing (China); Zhang, Yongjun [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China)

    2011-06-15

    negative magnitudes. This is inconsistent with changes of water vapor flux calculated from NCEP/NCAR reanalysis. Large scale atmospheric circulation changes derived from NCEP/NCAR reanalysis grids show that a strengthening anticyclonic circulation, increasing geopotential height and rapid warming over the Eurasian continent have contributed to the changes in climate extremes in China. (orig.)

  4. A Generalized Framework for Non-Stationary Extreme Value Analysis

    Science.gov (United States)

    Ragno, E.; Cheng, L.; Sadegh, M.; AghaKouchak, A.

    2017-12-01

    Empirical trends in climate variables including precipitation, temperature and snow-water equivalent at regional to continental scales are evidence of changes in climate over time. The evolving climate conditions and human activity-related factors such as urbanization and population growth can exert further changes in weather and climate extremes. As a result, the scientific community faces an increasing demand for updated appraisal of the time-varying climate extremes. The purpose of this study is to offer a robust and flexible statistical tool for non-stationary extreme value analysis which can better characterize the severity and likelihood of extreme climatic variables. This is critical to ensure a more resilient environment in a changing climate. Following the positive feedback on the first version of the Non-Stationary Extreme Value Analysis (NEVA) Toolbox by Cheng et al. 2014, we present an improved version, i.e. NEVA2.0. The upgraded version herein builds upon a newly-developed hybrid evolution Markov Chain Monte Carlo (MCMC) approach for numerical parameter estimation and uncertainty assessment. This addition leads to more robust uncertainty estimates of return levels, return periods, and risks of climatic extremes under both stationary and non-stationary assumptions. Moreover, NEVA2.0 is flexible in incorporating any user-specified covariate other than the default time-covariate (e.g., CO2 emissions, large scale climatic oscillation patterns). The new feature will allow users to examine non-stationarity of extremes induced by physical conditions that underlie the extreme events (e.g. antecedent soil moisture deficit, large-scale climatic teleconnections, urbanization). In addition, the new version offers an option to generate stationary and/or non-stationary rainfall Intensity - Duration - Frequency (IDF) curves that are widely used for risk assessment and infrastructure design. Finally, a Graphical User Interface (GUI) of the package is provided, making NEVA
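
    As a complement, the basic idea of a non-stationary GEV fit with a covariate-dependent location parameter can be sketched as follows. This is a maximum-likelihood toy example using SciPy, not the Bayesian MCMC machinery of NEVA2.0; the synthetic data, the time covariate and the parameterisation are purely illustrative (SciPy's shape parameter c corresponds to -xi in the usual GEV convention).

```python
# Minimal sketch of non-stationary GEV fitting: location varies linearly with a covariate.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def neg_log_lik(theta, x, covariate):
    mu0, mu1, log_sigma, c = theta                     # location intercept/trend, log-scale, shape
    mu = mu0 + mu1 * covariate                         # non-stationary location parameter
    ll = genextreme.logpdf(x, c, loc=mu, scale=np.exp(log_sigma))
    return -np.sum(ll) if np.all(np.isfinite(ll)) else np.inf

# synthetic annual maxima with a built-in upward trend in the location parameter
rng = np.random.default_rng(1)
years = np.arange(60)
annual_max = genextreme.rvs(-0.1, loc=20 + 0.05 * years, scale=3, size=60, random_state=rng)

theta0 = np.array([annual_max.mean(), 0.0, np.log(annual_max.std()), 0.0])
fit = minimize(neg_log_lik, theta0, args=(annual_max, years), method="Nelder-Mead")
mu0, mu1, log_sigma, c = fit.x
print(f"mu0={mu0:.2f}, trend={mu1:.3f}/yr, sigma={np.exp(log_sigma):.2f}, shape c={c:.2f}")
```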

  5. Expected impacts of climate change on extreme climate events

    International Nuclear Information System (INIS)

    Planton, S.; Deque, M.; Chauvin, F.; Terray, L.

    2008-01-01

    An overview of the expected change of climate extremes during this century due to greenhouse gases and aerosol anthropogenic emissions is presented. The most commonly used methodologies rely on the dynamical or statistical down-scaling of climate projections, performed with coupled atmosphere-ocean general circulation models. Whether of dynamical or of statistical type, down-scaling methods have strengths and weaknesses, but neither their validation under present climate conditions nor their potential ability to project the impact of climate change on extreme event statistics gives a clear advantage to either type. The results synthesized in the last IPCC report and more recent studies underline a convergence for a very likely increase in heat wave episodes over land surfaces, linked to the mean warming and the increase in temperature variability. In addition, the number of days of frost should decrease and the growing season length should increase. The projected increase in heavy precipitation events also appears very likely over most areas and seems linked to a change in the shape of the precipitation intensity distribution. The global trends for drought duration are less consistent between models and down-scaling methodologies, due to their regional variability. The change of wind-related extremes is also regionally dependent, and associated with a poleward displacement of the mid-latitude storm tracks. The specific study of extreme events over France reveals the high sensitivity of some statistics of climate extremes at the decadal time scale as a consequence of regional climate internal variability. (authors)

  6. Numerical modelling of extreme waves by Smoothed Particle Hydrodynamics

    Directory of Open Access Journals (Sweden)

    M. H. Dao

    2011-02-01

    Full Text Available The impact of extreme/rogue waves can cause serious damage to vessels as well as marine and coastal structures. Such extreme waves in deep water are characterized by steep wave fronts and an energetic wave crest. The process of wave breaking is highly complex and, apart from the general knowledge that impact loadings are highly impulsive, the dynamics of the breaking and impact are still poorly understood. An advanced numerical method, Smoothed Particle Hydrodynamics (SPH) enhanced with parallel computing, is able to reproduce the extreme waves and their breaking process well. Once the waves and their breaking process are modelled successfully, the dynamics of the breaking and the characteristics of their impact on offshore structures could be studied. The computational methodology and numerical results are presented in this paper.
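
    To make the numerical approach concrete, the sketch below shows two basic SPH ingredients in plain NumPy: the standard cubic-spline smoothing kernel and a summation-density estimate for a particle set. It is a toy illustration only and has no connection to the parallel solver used in the paper; the particle positions and smoothing length are arbitrary.

```python
# Minimal SPH building blocks: cubic-spline kernel and summation density.
import numpy as np

def cubic_spline_kernel(r, h):
    """Cubic-spline kernel W(r, h) in 3D, with support 2h and normalisation 1/(pi h^3)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def summation_density(positions, masses, h):
    """rho_i = sum_j m_j W(|r_i - r_j|, h) over all particles (O(N^2), fine for a demo)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.random((200, 3))                 # 200 particles in a unit box
m = np.full(200, 1.0 / 200)                # total mass 1
print(summation_density(pos, m, h=0.2)[:5])
```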

  7. Materials and nanosystems : interdisciplinary computational modeling at multiple scales

    International Nuclear Information System (INIS)

    Huber, S.E.

    2014-01-01

    Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of

  8. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  9. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  10. Large-scale computer networks and the future of legal knowledge-based systems

    NARCIS (Netherlands)

    Leenes, R.E.; Svensson, Jorgen S.; Hage, J.C.; Bench-Capon, T.J.M.; Cohen, M.J.; van den Herik, H.J.

    1995-01-01

    In this paper we investigate the relation between legal knowledge-based systems and large-scale computer networks such as the Internet. On the one hand, researchers of legal knowledge-based systems have claimed huge possibilities, but despite the efforts over the last twenty years, the number of

  11. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Science.gov (United States)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem-specific parameter sets.
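
    For illustration, the sketch below shows a generic PSO velocity update with an exponentially decaying inertia weight, which is one plausible reading of the "exponential velocity decay" modification; the "execution best" particle, the attractive-force term and all coefficients used by the authors are not reproduced here, and the toy objective is a plain sphere function rather than the MSER fitness.

```python
# Minimal PSO sketch with an exponentially decaying inertia weight (illustrative only).
import numpy as np

def pso_step(pos, vel, pbest, gbest, t, w0=0.9, tau=50.0, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng()
    w = w0 * np.exp(-t / tau)                          # exponentially decaying inertia
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# toy usage: minimise f(x) = sum(x^2) with 30 particles in 2D
rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, (30, 2)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), (pos**2).sum(axis=1)
for t in range(200):
    gbest = pbest[np.argmin(pbest_f)]
    pos, vel = pso_step(pos, vel, pbest, gbest, t, rng=rng)
    f = (pos**2).sum(axis=1)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
print("best value found:", pbest_f.min())
```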

  12. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2005-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...

  13. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...

  14. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at a high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, the current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10^2 m and minute scales in a fast and computationally efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in flow response and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.

  15. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment
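
    The derivative-based propagation step can be illustrated with a small first-order example: sensitivities dR/dp_i are combined with the parameter standard deviations to approximate the response uncertainty. The sketch below uses central finite differences on a hypothetical three-parameter model; GRESS and ADGEN instead obtain the derivatives by automated differentiation of the model source code, which is not reproduced here, and the assumption of uncorrelated parameters is mine.

```python
# Minimal sketch of first-order, derivative-based uncertainty propagation.
import numpy as np

def model(p):
    # hypothetical response: a simple nonlinear function of three parameters
    k, s, q = p
    return q * np.exp(-k * s)

def first_order_uncertainty(model, p0, sigmas, eps=1e-6):
    p0 = np.asarray(p0, dtype=float)
    grads = np.empty_like(p0)
    for i in range(p0.size):                           # finite-difference sensitivities dR/dp_i
        dp = np.zeros_like(p0)
        dp[i] = eps * max(abs(p0[i]), 1.0)
        grads[i] = (model(p0 + dp) - model(p0 - dp)) / (2 * dp[i])
    var = np.sum((grads * np.asarray(sigmas)) ** 2)    # assumes uncorrelated parameters
    return grads, np.sqrt(var)

grads, sigma_R = first_order_uncertainty(model, p0=[0.3, 2.0, 10.0], sigmas=[0.05, 0.1, 1.0])
print("sensitivities:", grads, " response std:", sigma_R)
```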

  16. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain gains from having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  17. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  18. Modulation of extreme temperatures in Europe under extreme values of the North Atlantic Oscillation Index.

    Science.gov (United States)

    Beniston, Martin

    2018-03-10

    This paper reports on the influence that extreme values in the tails of the North Atlantic Oscillation (NAO) Index probability density function (PDF) can exert on temperatures in Europe. When the NAO Index enters into its lowest (10% quantile or less) and highest (90% quantile or higher) modes, European temperatures often exhibit large negative or positive departures from their mean values, respectively. Analyses of the joint quantiles of the Index and temperatures (i.e., the simultaneous exceedance of particular quantile thresholds by the two variables) show that temperatures enter into the upper or lower tails of their PDF when the NAO Index also enters into its extreme tails, more often than could be expected from random statistics. Studies of this nature help further our understanding of the manner by which mechanisms of decadal-scale climate variability can influence extremes of temperature, and thus perhaps improve the forecasting of extreme temperatures in weather and climate models. © 2018 New York Academy of Sciences.
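
    The joint-quantile diagnostic is easy to reproduce on synthetic data: count how often two series simultaneously exceed their own 90% quantiles and compare this with the 1% expected under independence. The sketch below uses artificial, correlated stand-ins for the NAO Index and a temperature series, not the observational records analysed in the paper.

```python
# Minimal sketch of a joint 90%-quantile exceedance count for two correlated series.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
nao = rng.normal(size=n)                           # stand-in for the NAO index ...
temp = 0.6 * nao + 0.8 * rng.normal(size=n)        # ... and a correlated temperature series

q_nao, q_t = np.quantile(nao, 0.9), np.quantile(temp, 0.9)
joint = np.mean((nao > q_nao) & (temp > q_t))
print(f"joint exceedance frequency: {joint:.3f}  (independent expectation: {0.1 * 0.1:.3f})")
```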

  19. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer at gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian-distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  20. Multifractal Conceptualisation of Hydro-Meteorological Extremes

    Science.gov (United States)

    Tchiguirinskaia, I.; Schertzer, D.; Lovejoy, S.

    2009-04-01

    Hydrology and more generally sciences involved in water resources management, technological or operational developments face a fundamental difficulty: the extreme variability of hydro-meteorological fields. It clearly appears today that this variability is a function of the observation scale and yields hydro-meteorological hazards. Throughout the world, the development of multifractal theory offers new techniques for handling such non-classical variability over wide ranges of time and space scales. The resulting stochastic simulations with a very limited number of parameters well reproduce the long range dependencies and the clustering of rainfall extremes, often yielding fat-tailed (i.e., algebraic type) probability distributions. The goal of this work was to investigate the ability to use very short or incomplete data records for reliable statistical predictions of the extremes. In particular, we discuss how to evaluate the uncertainty in the empirical or semi-analytical multifractal outcomes. We consider three main aspects of the evaluation, such as the scaling adequacy, the multifractal parameter estimation error and the quantile estimation error. We first use the multiplicative cascade model to generate long series of multifractal data. The simulated samples had to cover the range of the universal multifractal parameters widely available in the scientific literature for rainfall and river discharges. Using these long multifractal series and their sub-samples, we defined a metric for parameter estimation error. Then using the sets of estimated parameters, we obtained the quantile values for a range of exceedance probabilities from 5% to 0.01%. Plotting the error bars on a quantile plot enables an approximation of confidence intervals that would be particularly important for the predictions of multifractal extremes. We finally illustrate the efficiency of such a concept through its application to a large database (more than 16000 selected stations over USA and
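
    The multiplicative-cascade construction underlying such simulations can be sketched in a few lines: a field is repeatedly subdivided and multiplied by independent, mean-one random weights, which produces the heavy-tailed clustering mentioned above. The toy lognormal cascade below only illustrates the mechanism; it is not the universal-multifractal simulator used in the study, and the weight distribution is an assumption.

```python
# Minimal sketch of a discrete 1-D multiplicative cascade with lognormal, mean-one weights.
import numpy as np

def lognormal_cascade(n_levels, sigma=0.4, seed=0):
    rng = np.random.default_rng(seed)
    field = np.ones(1)
    for _ in range(n_levels):
        field = np.repeat(field, 2)                   # split each interval in two
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=field.size)
        field = field * w                             # multiply by mean-one weights
    return field

series = lognormal_cascade(n_levels=14)               # 2**14 values with intermittent peaks
print(series.size, series.mean(), series.max())
```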

  1. Nutrition security under extreme events

    Science.gov (United States)

    Martinez, A.

    2017-12-01

    Nutrition security under extreme events. With zero hunger being one of the United Nations Sustainable Development Goals, food security has become a trending research topic. However, the impact of extreme events on global food security is not yet fully understood, and a better comprehension of the underlying mechanisms of global food trade and nutrition security is needed to improve countries' resilience to extreme events. In a globalized world, food is still a highly regulated commodity and a strategic resource. A drought happening in a net food-exporter will have little to no effect on its own population, but the repercussions on net food-importers can be extreme. In this project, we propose a methodology to describe and quantify the impact of a local drought on human health at a global scale. For this purpose, nutrition supply and global trade data from FAOSTAT have been used together with domestic food production from national agencies and FAOSTAT, global precipitation from the Climatic Research Unit and health data from the World Health Organization. A modified Herfindahl-Hirschman Index (HHI) has been developed to measure the level of resilience of one country to a drought happening in another country. This index describes how dependent a country is on imports and how diverse its imports are. Losses of production and exportation due to extreme events have been calculated using yield data and a simple food balance at country scale. Results show that the countries most affected by global droughts are the ones with the highest dependency on a single exporting country. Changes induced by droughts also disturbed their domestic protein, fat and calorie supply, resulting most of the time in a higher intake of calories or fat relative to proteins.
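
    The concentration measure at the heart of this analysis can be illustrated with a plain Herfindahl-Hirschman index over a country's import shares; the specific modification introduced in the study is not reproduced here, and the import figures below are hypothetical.

```python
# Minimal sketch of an HHI-style import-concentration index (higher = less diversified).
import numpy as np

def import_hhi(import_values):
    """HHI of import shares: ranges from 1/n (fully diversified) up to 1 (single supplier)."""
    v = np.asarray(import_values, dtype=float)
    shares = v / v.sum()
    return float(np.sum(shares ** 2))

# hypothetical imports of one commodity by one country, broken down by exporting partner
print(import_hhi([120.0, 30.0, 10.0, 5.0]))   # dominated by one exporter -> high HHI
print(import_hhi([40.0, 40.0, 45.0, 40.0]))   # diversified supply        -> low HHI
```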

  2. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources and high effectiveness of the evaluation of the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growing amount of data in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In the study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’—Extreme Entropy Machine and Extremely Randomized Trees—for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but they also proved to be less computationally complex, requiring fewer steps to perform an optimization procedure.
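
    The "extreme" versus "non-extreme" comparison is easy to reproduce for the tree-based pair with scikit-learn, where Extremely Randomized Trees are available as ExtraTreesClassifier. The sketch below uses a synthetic dataset rather than the compound-activity fingerprints of the study, and it does not cover the Extreme Entropy Machine.

```python
# Minimal sketch: Extremely Randomized Trees vs. Random Forest on a synthetic binary task.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=50, n_informative=10, random_state=0)
for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            ExtraTreesClassifier(n_estimators=200, random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__}: mean CV accuracy = {acc:.3f}")
```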

  3. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  4. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    Science.gov (United States)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.

  5. Google Earth Engine: a new cloud-computing platform for global-scale earth observation data and analysis

    Science.gov (United States)

    Moore, R. T.; Hansen, M. C.

    2011-12-01

    Google Earth Engine is a new technology platform that enables monitoring and measurement of changes in the earth's environment, at planetary scale, on a large catalog of earth observation data. The platform offers intrinsically-parallel computational access to thousands of computers in Google's data centers. Initial efforts have focused primarily on global forest monitoring and measurement, in support of REDD+ activities in the developing world. The intent is to put this platform into the hands of scientists and developing world nations, in order to advance the broader operational deployment of existing scientific methods, and strengthen the ability for public institutions and civil society to better understand, manage and report on the state of their natural resources. Earth Engine currently hosts online nearly the complete historical Landsat archive of L5 and L7 data collected over more than twenty-five years. Newly-collected Landsat imagery is downloaded from USGS EROS Center into Earth Engine on a daily basis. Earth Engine also includes a set of historical and current MODIS data products. The platform supports generation, on-demand, of spatial and temporal mosaics, "best-pixel" composites (for example to remove clouds and gaps in satellite imagery), as well as a variety of spectral indices. Supervised learning methods are available over the Landsat data catalog. The platform also includes a new application programming framework, or "API", that allows scientists access to these computational and data resources, to scale their current algorithms or develop new ones. Under the covers of the Google Earth Engine API is an intrinsically-parallel image-processing system. Several forest monitoring applications powered by this API are currently in development and expected to be operational in 2011. Combining science with massive data and technology resources in a cloud-computing framework can offer advantages of computational speed, ease-of-use and collaboration, as

  6. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher resolution simulation runs.
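
    The flavour of a topology-aware, load-balanced placement decision can be conveyed with a small sketch: candidate storage targets are ranked by a weighted combination of their current load and their network distance from the client. The data structure, field names and cost weights below are illustrative assumptions, not the algorithm deployed on Titan.

```python
# Minimal sketch of topology-aware, load-balanced selection of storage targets.
def choose_targets(targets, n_needed, alpha=1.0, beta=0.1):
    """targets: list of dicts with 'id', 'load' (pending I/O) and 'hops' (network distance)."""
    ranked = sorted(targets, key=lambda t: alpha * t["load"] + beta * t["hops"])
    return [t["id"] for t in ranked[:n_needed]]

targets = [
    {"id": "ost-0", "load": 12, "hops": 1},
    {"id": "ost-1", "load": 2,  "hops": 4},
    {"id": "ost-2", "load": 3,  "hops": 1},
    {"id": "ost-3", "load": 9,  "hops": 2},
]
print(choose_targets(targets, n_needed=2))   # -> lightly loaded, nearby targets first
```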

  7. The NASA Energy and Water Cycle Extreme (NEWSE) Integration Project

    Science.gov (United States)

    House, P. R.; Lapenta, W.; Schiffer, R.

    2008-01-01

    Skillful predictions of water and energy cycle extremes (flood and drought) are elusive. To better understand the mechanisms responsible for water and energy extremes, and to make decisive progress in predicting these extremes, the collaborative NASA Energy and Water cycle Extremes (NEWSE) Integration Project is studying these extremes in the U.S. Southern Great Plains (SGP) during 2006-2007, including their relationships with continental and global scale processes, and assessment of their predictability on multiple space and time scales. It is our hypothesis that an integrative analysis of observed extremes that reflects the current understanding of how SST and soil moisture variability influence atmospheric heating and the forcing of planetary waves, incorporating recently available global and regional hydro-meteorological datasets (i.e., precipitation, water vapor, clouds, etc.) in conjunction with advances in data assimilation, can lead to new insights into the factors that lead to persistent drought and flooding. We will show initial results of this project, whose goals are to provide an improved definition, attribution and prediction on sub-seasonal to interannual time scales, improved understanding of the mechanisms of decadal drought and its predictability, including the impacts of SST variability and deep soil moisture variability, and improved monitoring/attributions, with transition to applications; a bridging of the gap between hydrological forecasts and stakeholders (utilization of probabilistic forecasts, education, forecast interpretation for different sectors, assessment of uncertainties for different sectors, etc.).

  8. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    International Nuclear Information System (INIS)

    Yadigaroglu, G.

    2005-01-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods

  9. Computational Fluid Dynamics for nuclear applications: from CFD to multi-scale CMFD

    Energy Technology Data Exchange (ETDEWEB)

    Yadigaroglu, G. [Swiss Federal Institute of Technology-Zurich (ETHZ), Nuclear Engineering Laboratory, ETH-Zentrum, CLT CH-8092 Zurich (Switzerland)]. E-mail: yadi@ethz.ch

    2005-02-01

    New trends in computational methods for nuclear reactor thermal-hydraulics are discussed; traditionally, these have been based on the two-fluid model. Although CFD computations for single phase flows are commonplace, Computational Multi-Fluid Dynamics (CMFD) is still under development. One-fluid methods coupled with interface tracking techniques provide interesting opportunities and enlarge the scope of problems that can be solved. For certain problems, one may have to conduct 'cascades' of computations at increasingly finer scales to resolve all issues. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water and a proposed CMFD initiative to numerically model Critical Heat Flux (CHF) illustrate such cascades. For the venting problem, a variety of tools are used: a system code for system behaviour; an interface-tracking method (Volume of Fluid, VOF) to examine the behaviour of large bubbles; direct-contact condensation can be treated either by Direct Numerical Simulation (DNS) or by analytical methods.

  10. Fan-out Estimation in Spin-based Quantum Computer Scale-up.

    Science.gov (United States)

    Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R

    2017-10-17

    Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb confined (e.g. donor based spin qubits) or electrostatically confined (e.g. quantum dot based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.

  11. Vortex-Concept for Radioactivity Release Prevention at NPP: Development of Computational Model of Lab-Scale Experimental Setup

    Energy Technology Data Exchange (ETDEWEB)

    Ullah, Sana; Sung, Yim Man; Park, Jin Soo; Sung Hyung Jin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The experimental validation of the vortex-like air curtain concept and the use of an appropriate CFD modelling approach for analyzing the problem become crucial. A lab-scale experimental setup is designed to validate the proposed concept and CFD modeling approach as part of the validation process. In this study, a computational model of this lab-scale experimental setup is developed using the open-source CFD code OpenFOAM. The computational results will be compared with experimental data for validation purposes in the future, when experimental data become available. 1) A computational model of a lab-scale experimental setup, designed to validate the concept of artificial vortex-like airflow generation for application to radioactivity dispersion prevention in the event of a severe accident, was developed. 2) A mesh sensitivity study was performed and a mesh of about 2 million cells was found to be sufficient for this setup.

  12. Interactive Computer Graphics

    Science.gov (United States)

    Kenwright, David

    2000-01-01

    Aerospace data analysis tools that significantly reduce the time and effort needed to analyze large-scale computational fluid dynamics simulations have emerged this year. The current approach for most postprocessing and visualization work is to explore the 3D flow simulations with one of a dozen or so interactive tools. While effective for analyzing small data sets, this approach becomes extremely time consuming when working with data sets larger than one gigabyte. An active area of research this year has been the development of data mining tools that automatically search through gigabyte data sets and extract the salient features with little or no human intervention. With these so-called feature extraction tools, engineers are spared the tedious task of manually exploring huge amounts of data to find the important flow phenomena. The software tools identify features such as vortex cores, shocks, separation and attachment lines, recirculation bubbles, and boundary layers. Some of these features can be extracted in a few seconds; others take minutes to hours on extremely large data sets. The analysis can be performed off-line in a batch process, either during or following the supercomputer simulations. These computations have to be performed only once, because the feature extraction programs search the entire data set and find every occurrence of the phenomena being sought. Because the important questions about the data are being answered automatically, interactivity is less critical than it is with traditional approaches.

  13. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  14. Computability, complexity, and languages fundamentals of theoretical computer science

    CERN Document Server

    Davis, Martin D; Rheinboldt, Werner

    1983-01-01

    Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science provides an introduction to the various aspects of theoretical computer science. Theoretical computer science is the mathematical study of models of computation. This text is composed of five parts encompassing 17 chapters, and begins with an introduction to the use of proofs in mathematics and the development of computability theory in the context of an extremely simple abstract programming language. The succeeding parts demonstrate the performance of abstract programming language using a macro expa

  15. Large-scale computational drug repositioning to find treatments for rare diseases.

    Science.gov (United States)

    Govindaraj, Rajiv Gandhi; Naderi, Misagh; Singha, Manali; Lemoine, Jeffrey; Brylinski, Michal

    2018-01-01

    Rare, or orphan, diseases are conditions afflicting a small subset of people in a population. Although these disorders collectively pose significant health care problems, drug companies require government incentives to develop drugs for rare diseases due to extremely limited individual markets. Computer-aided drug repositioning, i.e., finding new indications for existing drugs, is a cheaper and faster alternative to traditional drug discovery offering a promising avenue for orphan drug research. Structure-based matching of drug-binding pockets is among the most promising computational techniques to inform drug repositioning. In order to find new targets for known drugs ultimately leading to drug repositioning, we recently developed eMatchSite, a new computer program to compare drug-binding sites. In this study, eMatchSite is combined with virtual screening to systematically explore opportunities to reposition known drugs to proteins associated with rare diseases. The effectiveness of this integrated approach is demonstrated for a kinase inhibitor, which is a confirmed candidate for repositioning to synapsin Ia. The resulting dataset comprises 31,142 putative drug-target complexes linked to 980 orphan diseases. The modeling accuracy is evaluated against the structural data recently released for tyrosine-protein kinase HCK. To illustrate how potential therapeutics for rare diseases can be identified, we discuss a possibility to repurpose a steroidal aromatase inhibitor to treat Niemann-Pick disease type C. Overall, the exhaustive exploration of the drug repositioning space exposes new opportunities to combat orphan diseases with existing drugs. DrugBank/Orphanet repositioning data are freely available to the research community at https://osf.io/qdjup/.

  16. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into kernel extreme learning machine (KELM). In the wavelet analysis, bases localized in time and frequency were used to represent various signals effectively. Wavelet kernel extreme learning machine (WKELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved from the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
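
    A generic kernel ELM with a Morlet-type wavelet kernel can be sketched as below; the output weights are obtained from the standard regularised solve beta = (I/C + K)^{-1} y. This is only a baseline WKELM-style illustration on toy regression data, not the parsimonious PWKELM algorithm with Householder-based kernel-function selection described in the paper, and the kernel form and hyperparameters are assumptions.

```python
# Minimal sketch of a kernel ELM with a Morlet-type wavelet kernel.
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    """k(x, z) = prod_i cos(1.75 (x_i - z_i)/a) * exp(-(x_i - z_i)^2 / (2 a^2))."""
    d = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(1.75 * d / a) * np.exp(-d**2 / (2 * a**2)), axis=-1)

def kelm_fit(X, y, C=100.0, a=1.0):
    K = wavelet_kernel(X, X, a)
    return np.linalg.solve(np.eye(len(X)) / C + K, y)   # beta = (I/C + K)^-1 y

def kelm_predict(X_train, beta, X_new, a=1.0):
    return wavelet_kernel(X_new, X_train, a) @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
beta = kelm_fit(X, y)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(kelm_predict(X, beta, X_test))                    # approximates sin at the test points
```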

  17. Sensitivity of Rainfall Extremes Under Warming Climate in Urban India

    Science.gov (United States)

    Ali, H.; Mishra, V.

    2017-12-01

    Extreme rainfall events in urban India have halted transportation, damaged infrastructure, and affected human lives. Rainfall extremes are projected to increase under the future climate. We evaluated the relationship (scaling) between rainfall extremes at different temporal resolutions (daily, 3-hourly, and 30 minutes), daily dewpoint temperature (DPT) and daily air temperature at 850 hPa (T850) for 23 urban areas in India. Daily rainfall extremes obtained from the Global Surface Summary of the Day (GSOD) data showed positive regression slopes for most of the cities, with a median of 14%/K for the period 1979-2013 for DPT and T850, which is higher than the Clausius-Clapeyron (C-C) rate (~7%). Moreover, sub-daily rainfall extremes are more sensitive to both DPT and T850. For instance, 3-hourly rainfall extremes obtained from the Tropical Rainfall Measuring Mission (TRMM 3B42 V7) showed regression slopes of more than 16%/K against DPT and T850 for the period 1998-2015. Half-hourly rainfall extremes from the Integrated Multi-satellitE Retrievals (IMERG) for the Global Precipitation Measurement (GPM) mission also showed higher sensitivity to changes in DPT and T850. The super-scaling of rainfall extremes against changes in DPT and T850 can be attributed to the convective nature of precipitation in India. Our results show that urban India may witness non-stationary rainfall extremes, which, in turn, will affect stormwater designs and the frequency and magnitude of urban flooding.
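
    The scaling rate (regression slope in %/K) reported above is commonly estimated with a binning approach: extreme percentiles of rainfall intensity are computed per temperature bin and an exponential fit gives the rate of change. The sketch below applies this to synthetic data generated with a built-in dependence of roughly 7%/K; it does not use the GSOD, TRMM or IMERG records, and the bin widths and percentile choice are assumptions.

```python
# Minimal sketch of binned precipitation-temperature scaling estimation.
import numpy as np

rng = np.random.default_rng(0)
dpt = rng.uniform(5, 30, 20000)                                # dew point temperature (deg C)
# synthetic rain intensities whose extremes grow ~7%/K on top of lognormal noise
rain = np.exp(0.07 * dpt) * rng.lognormal(mean=0.0, sigma=0.6, size=dpt.size)

bins = np.arange(5, 31, 2.5)
centers, p99 = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (dpt >= lo) & (dpt < hi)
    if sel.sum() > 100:
        centers.append(0.5 * (lo + hi))
        p99.append(np.percentile(rain[sel], 99))               # 99th percentile per bin

slope = np.polyfit(centers, np.log(p99), 1)[0]                 # d ln(P99) / dT
print(f"estimated scaling rate: {100 * (np.exp(slope) - 1):.1f} %/K")
```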

  18. Computational psychotherapy research: scaling up the evaluation of patient-provider interactions.

    Science.gov (United States)

    Imel, Zac E; Steyvers, Mark; Atkins, David C

    2015-03-01

    In psychotherapy, the patient-provider interaction contains the treatment's active ingredients. However, the technology for analyzing the content of this interaction has not fundamentally changed in decades, limiting both the scale and specificity of psychotherapy research. New methods are required to "scale up" to larger evaluation tasks and "drill down" into the raw linguistic data of patient-therapist interactions. In the current article, we demonstrate the utility of statistical text analysis models called topic models for discovering the underlying linguistic structure in psychotherapy. Topic models identify semantic themes (or topics) in a collection of documents (here, transcripts). We used topic models to summarize and visualize 1,553 psychotherapy and drug therapy (i.e., medication management) transcripts. Results showed that topic models identified clinically relevant content, including affective, relational, and intervention-related topics. In addition, topic models learned to identify specific types of therapist statements associated with treatment-related codes (e.g., different treatment approaches, patient-therapist discussions about the therapeutic relationship). Visualizations of semantic similarity across sessions indicate that topic models identify content that discriminates between broad classes of therapy (e.g., cognitive-behavioral therapy vs. psychodynamic therapy). Finally, predictive modeling demonstrated that topic model-derived features can classify therapy type with a high degree of accuracy. Computational psychotherapy research has the potential to scale up the study of psychotherapy to thousands of sessions at a time. We conclude by discussing the implications of computational methods such as topic models for the future of psychotherapy research and practice. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
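
    For readers unfamiliar with topic models, the toy sketch below fits a small latent Dirichlet allocation (LDA) model with scikit-learn and prints the top words per topic; the short strings stand in for session transcripts and are purely illustrative, not data from the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for psychotherapy / medication-management transcripts
transcripts = [
    "I felt anxious about work and we discussed coping strategies",
    "we reviewed the medication dose and side effects this week",
    "talked about my relationship with my mother and childhood memories",
    "sleep problems and the new prescription were the main topics",
]

counts = CountVectorizer(stop_words="english").fit(transcripts)
X = counts.transform(transcripts)                       # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# The highest-weight words per topic approximate the semantic themes
terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```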

  19. Seasonal temperature extremes in Potsdam

    Science.gov (United States)

    Kundzewicz, Zbigniew; Huang, Shaochun

    2010-12-01

    The awareness of global warming is well established and results from the observations made on thousands of stations. This paper complements the large-scale results by examining a long time series of high-quality temperature data from the Secular Meteorological Station in Potsdam, where observation records covering the last 117 years, i.e., from January 1893 onward, are available. Tendencies of change in seasonal temperature-related climate extremes are demonstrated. "Cold" extremes have become less frequent and less severe than in the past, while "warm" extremes have become more frequent and more severe. Moreover, the interval of the occurrence of frost has been decreasing, while the interval of the occurrence of hot days has been increasing. However, many changes are not statistically significant, since the variability of temperature indices at the Potsdam station has been very strong.

  20. Large scale inverse problems computational methods and applications in the earth sciences

    CERN Document Server

    Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan

    2013-01-01

    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment", which took place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.

  1. Stochastic generation of multi-site daily precipitation focusing on extreme events

    Directory of Open Access Journals (Sweden)

    G. Evin

    2018-01-01

    Full Text Available Many multi-site stochastic models have been proposed for the generation of daily precipitation, but they generally focus on the reproduction of low to high precipitation amounts at the stations concerned. This paper proposes significant extensions to the multi-site daily precipitation model introduced by Wilks, with the aim of reproducing the statistical features of extremely rare events (in terms of frequency and magnitude) at different temporal and spatial scales. In particular, the first extended version integrates heavy-tailed distributions, spatial tail dependence, and temporal dependence in order to obtain a robust and appropriate representation of the most extreme precipitation fields. A second version enhances the first version using a disaggregation method. The performance of these models is compared at different temporal and spatial scales on a large region covering approximately half of Switzerland. While daily extremes are adequately reproduced at the stations by all models, including the benchmark Wilks version, extreme precipitation amounts at larger temporal scales (e.g., 3-day amounts) are clearly underestimated when temporal dependence is ignored.

  2. Medium/small-scale computers HITACHI M-620, M-630, and M-640 systems: the aim of development and characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Oshima, N; Saiki, Y; Sunaga, K [Hitachi, Ltd., Tokyo (Japan)

    1990-10-01

    The medium/small-scale HITACHI M-620, M-630, and M-640 computer systems are outlined. Each system can be configured as a medium- or small-scale host computer in offices, can be connected to large-scale host computers, and offers 5-50 times the performance of conventional office computers, together with easy operation and fast processing. As features of the hardware, the one-board CPU and the small integrated cubicle structure containing the CPU board, a high-speed, large-capacity magnetic disk storage device, various controllers, and other components are illustrated. As features of the software, the OS (VOS K), featured by the virtual data space control (VDSA) and relational database (RDB) functions, EAGLE/4GL (effective approach to achieving high level software productivity/4th generation language), STEP (self training environmental support program), and the simple end-user language ACE3/E2 are outlined. 7 figs.

  3. Computational optimization of catalyst distributions at the nano-scale

    International Nuclear Information System (INIS)

    Ström, Henrik

    2017-01-01

    Highlights: • Macroscopic data sampled from a DSMC simulation contain statistical scatter. • Simulated annealing is evaluated as an optimization algorithm with DSMC. • Proposed method is more robust than a gradient search method. • Objective function uses the mass transfer rate instead of the reaction rate. • Combined algorithm is more efficient than a macroscopic overlay method. - Abstract: Catalysis is a key phenomenon in a great number of energy processes, including feedstock conversion, tar cracking, emission abatement and optimization of energy use. Within heterogeneous, catalytic nano-scale systems, the chemical reactions typically proceed at very high rates at a gas–solid interface. However, the statistical uncertainties characteristic of molecular processes pose efficiency problems for computational optimizations of such nano-scale systems. The present work investigates the performance of a Direct Simulation Monte Carlo (DSMC) code with a stochastic optimization heuristic for evaluations of an optimal catalyst distribution. The DSMC code treats molecular motion with homogeneous and heterogeneous chemical reactions in wall-bounded systems, and algorithms have been devised that allow optimization of the distribution of a catalytically active material within a three-dimensional duct (e.g. a pore). The objective function is the outlet concentration of computational molecules that have interacted with the catalytically active surface, and the optimization method used is simulated annealing. The application of a stochastic optimization heuristic is shown to be more efficient within the present DSMC framework than using a macroscopic overlay method. Furthermore, it is shown that the performance of the developed method is superior to that of a gradient search method for the current class of problems. Finally, the advantages and disadvantages of different types of objective functions are discussed.
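
    The acceptance rule at the heart of simulated annealing is easy to state in code. The sketch below optimizes a noisy stand-in objective (mimicking the statistical scatter of a DSMC sample) over a binary catalyst layout; the move set, cooling schedule, and objective are assumptions for illustration, not the authors' implementation.

```python
import math
import random

def noisy_objective(layout):
    """Stand-in for a DSMC evaluation: 'conversion' for a 0/1 catalyst layout, plus scatter."""
    signal = sum(w * c for w, c in zip(range(len(layout)), layout)) / len(layout)
    return signal + random.gauss(0.0, 0.05)        # statistical scatter, as in a real DSMC sample

def simulated_annealing(n_cells=20, n_catalyst=5, steps=2000, t0=1.0, cooling=0.995):
    layout = [1] * n_catalyst + [0] * (n_cells - n_catalyst)
    random.shuffle(layout)                         # random initial catalyst distribution
    best, best_f = layout[:], noisy_objective(layout)
    current, current_f, T = layout[:], best_f, t0
    for _ in range(steps):
        candidate = current[:]
        i = random.choice([k for k, v in enumerate(candidate) if v == 1])
        j = random.choice([k for k, v in enumerate(candidate) if v == 0])
        candidate[i], candidate[j] = 0, 1          # move one catalyst patch to a new cell
        f = noisy_objective(candidate)
        # Metropolis rule: always accept improvements, sometimes accept worse moves
        if f > current_f or random.random() < math.exp((f - current_f) / T):
            current, current_f = candidate, f
            if f > best_f:
                best, best_f = candidate[:], f
        T *= cooling                               # geometric cooling schedule
    return best, best_f

print(simulated_annealing())
```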

  4. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10- to 20+-year) fundamental cybersecurity research and development challenges, strategies, and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  5. Synoptic and meteorological drivers of extreme ozone concentrations over Europe

    Science.gov (United States)

    Otero, Noelia Felipe; Sillmann, Jana; Schnell, Jordan L.; Rust, Henning W.; Butler, Tim

    2016-04-01

    The present work assesses the relationship between local and synoptic meteorological conditions and surface ozone concentration over Europe in spring and summer months during the period 1998-2012, using a new interpolated data set of observed surface ozone concentrations over the European domain. Along with local meteorological conditions, the influence of large-scale atmospheric circulation on surface ozone is addressed through a set of airflow indices computed with a novel implementation of a grid-by-grid weather type classification across Europe. Drivers of surface ozone over the full distribution of maximum daily 8-hour average values are investigated, along with drivers of the extreme high percentiles and exceedances of air quality guideline thresholds. Three different regression techniques are applied: multiple linear regression to assess the drivers of maximum daily ozone, logistic regression to assess the probability of threshold exceedances, and quantile regression to estimate the meteorological influence on extreme values, as represented by the 95th percentile. The relative importance of the input parameters (predictors) is assessed by a backward stepwise regression procedure that allows the identification of the most important predictors in each model. Spatial patterns of model performance exhibit distinct variations between regions. The inclusion of ozone persistence as a predictor is particularly relevant over Southern Europe. In general, the best model performance is found over Central Europe, where the maximum temperature plays an important role as a driver of maximum daily ozone as well as its extreme values, especially during warmer months.
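
    A compact sketch of the three regression techniques named above, applied to synthetic data with statsmodels, is shown below; the predictors, threshold, and coefficients are illustrative assumptions, not values from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "tmax": rng.normal(25, 5, n),     # daily maximum temperature (degC), illustrative
    "rh": rng.uniform(20, 90, n),     # relative humidity (%)
})
df["o3"] = 40 + 2.0 * (df.tmax - 25) - 0.1 * df.rh + rng.normal(0, 8, n)   # synthetic daily max ozone

# 1) Multiple linear regression for maximum daily ozone
ols = smf.ols("o3 ~ tmax + rh", data=df).fit()

# 2) Logistic regression for the probability of exceeding a guideline-like threshold (60, illustrative)
df["exceed"] = (df.o3 > 60).astype(int)
logit = smf.logit("exceed ~ tmax + rh", data=df).fit(disp=0)

# 3) Quantile regression for the 95th percentile of ozone
qreg = smf.quantreg("o3 ~ tmax + rh", data=df).fit(q=0.95)

print(ols.params["tmax"], logit.params["tmax"], qreg.params["tmax"])
```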

  6. Opacity calculations for extreme physical systems: code RACHEL

    Science.gov (United States)

    Drska, Ladislav; Sinor, Milan

    1996-08-01

    Computer simulations of physical systems under extreme conditions (high density, temperature, etc.) require the availability of extensive sets of atomic data. This paper presents basic information on a self-consistent approach to calculations of radiative opacity, one of the key characteristics of such systems. After a short explanation of general concepts of the atomic physics of extreme systems, the structure of the opacity code RACHEL is discussed and some of its applications are presented.

  7. Detecting change-points in extremes

    KAUST Repository

    Dupuis, D. J.

    2015-01-01

    Even though most work on change-point estimation focuses on changes in the mean, changes in the variance or in the tail distribution can lead to more extreme events. In this paper, we develop a new method of detecting and estimating the change-points in the tail of multiple time series data. In addition, we adapt existing tail change-point detection methods to our specific problem and conduct a thorough comparison of different methods in terms of performance on the estimation of change-points and computational time. We also examine three locations on the U.S. northeast coast and demonstrate that the methods are useful for identifying changes in seasonally extreme warm temperatures.

  8. Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Knight, Samuel [O8953; Baker, Gavin Matthew; Gamell, Marc [Rutgers U; Hollman, David [08953; Sjaardema, Gregor [SNL; Kolla, Hemanth [SNL; Teranishi, Keita; Wilke, Jeremiah J; Slattengren, Nicole [SNL; Bennett, Janine Camille

    2015-10-01

    Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of those, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor developed sufficiently for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion, and UINTAH) to examine whether these models permit the insertion of new programming model elements into an existing code base.

  9. Extreme Networks' 10-Gigabit Ethernet enables

    CERN Multimedia

    2002-01-01

    " Extreme Networks, Inc.'s 10-Gigabit switching platform enabled researchers to transfer one Terabyte of information from Vancouver to Geneva across a single network hop, the world's first large-scale, end-to-end transfer of its kind" (1/2 page).

  10. Exascale Co-Design Center for Materials in Extreme Environments (ExMatEx) Annual Report - Year 2

    Energy Technology Data Exchange (ETDEWEB)

    Germann, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McPherson, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Belak, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-25

    All activities of the Exascale Co-design Center for Materials in Extreme Environments (ExMatEx) are focused on the two ultimate goals of the project: (1) demonstrating and delivering a prototype scale-bridging materials science application based upon adaptive physics refinement, and (2) identifying the requirements for the exascale ecosystem that are necessary to perform computational materials science simulations (both single- and multi-scale). During the first year of ExMatEx, our focus was on establishing how we do computational materials science, by developing an initial suite of flexible proxy applications. These “proxy apps” are the primary vehicle for the co-design process, involving assessments and tradeoff evaluations both within the ExMatEx team, and with the entire exascale ecosystem. These interactions have formed the basis of our second year activities. The set of artifacts from these co-design interactions are the lessons learned, which are used to re-express the applications and algorithms within the context of emerging architectures, programming models, and runtime systems.

  11. Extreme hydrometeorological events in the Peruvian Central Andes during austral summer and their relationship with the large-scale circulation

    Science.gov (United States)

    Sulca, Juan C.

    In this Master's dissertation, atmospheric circulation patterns associated with extreme hydrometeorological events in the Mantaro Basin (MB), Peruvian Central Andes, and their teleconnections during the austral summer (December-January-February-March) are addressed. Extreme rainfall events in the Mantaro basin are related to variations of the large-scale circulation as indicated by the changing strength of the Bolivian High-Nordeste Low (BH-NL) system. Dry (wet) spells are associated with a weakening (strengthening) of the BH-NL system and reduced (enhanced) influx of moist air from the lowlands to the east due to strengthened westerly (easterly) wind anomalies at mid- and upper-tropospheric levels. At the same time extreme rainfall events of the opposite sign occur over northeastern Brazil (NEB) due to enhanced (inhibited) convective activity in conjunction with a strengthened (weakened) Nordeste Low. Cold episodes in the Mantaro Basin are grouped into three types: weak, strong and extraordinary cold episodes. Weak and strong cold episodes in the MB are mainly associated with a weakening of the BH-NL system due to tropical-extratropical interactions. Both types of cold episodes are associated with westerly wind anomalies at mid- and upper-tropospheric levels over the Peruvian Central Andes, which inhibit the influx of humid air masses from the lowlands to the east and hence limit the potential for development of convective cloud cover. The resulting clear sky conditions cause nighttime temperatures to drop, leading to cold extremes below the 10th percentile. Extraordinary cold episodes in the MB are associated with cold and dry polar air advection at all tropospheric levels toward the central Peruvian Andes. Therefore, weak and strong cold episodes in the MB appear to be caused by radiative cooling associated with reduced cloudiness, rather than cold air advection, while the latter plays an important role for extraordinary cold episodes only.

  12. "Extreme Programming" in a Bioinformatics Class

    Science.gov (United States)

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  13. Two spatial scales in a bleaching event: Corals from the mildest and the most extreme thermal environments escape mortality

    KAUST Repository

    Pineda, Jesús

    2013-07-28

    In summer 2010, a bleaching event decimated the abundant reef flat coral Stylophora pistillata in some areas of the central Red Sea, where a series of coral reefs 100–300 m wide by several kilometers long extends from the coastline to about 20 km offshore. Mortality of corals along the exposed and protected sides of inner (inshore) and mid and outer (offshore) reefs and in situ and satellite sea surface temperatures (SSTs) revealed that the variability in the mortality event corresponded to two spatial scales of temperature variability: 300 m across the reef flat and 20 km across a series of reefs. However, the relationship between coral mortality and habitat thermal severity was opposite at the two scales. SSTs in summer 2010 were similar or increased modestly (0.5°C) in the outer and mid reefs relative to 2009. In the inner reef, 2010 temperatures were 1.4°C above the 2009 seasonal maximum for several weeks. We detected little or no coral mortality in mid and outer reefs. In the inner reef, mortality depended on exposure. Within the inner reef, mortality was modest on the protected (shoreward) side, the most severe thermal environment, with highest overall mean and maximum temperatures. In contrast, acute mortality was observed in the exposed (seaward) side, where temperature fluctuations and upper water temperature values were relatively less extreme. Refuges to thermally induced coral bleaching may include sites where extreme, high-frequency thermal variability may select for coral holobionts preadapted to, and physiologically condition corals to withstand, regional increases in water temperature.

  14. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    Tri-isotropic (TRISO) fuel particle coating is critical for the future use of nuclear energy produced by advanced gas reactors (AGRs). The fuel kernels are coated using chemical vapor deposition in a spouted fluidized bed. The challenges encountered in operating TRISO fuel coaters are due to the fact that in modern AGRs, such as High Temperature Gas Reactors (HTGRs), the acceptable level of defective/failed coated particles is essentially zero. This specification requires processes that produce coated spherical particles with even coatings having extremely low defect fractions. Unfortunately, the scale-up and design of the current processes and coaters have been based on empirical approaches and are operated as black boxes. Hence, a voluminous amount of experimental development and trial-and-error work has been conducted. It has been clearly demonstrated that the quality of the coating applied to the fuel kernels is impacted by the hydrodynamics, solids flow field, and flow regime characteristics of the spouted bed coaters, which themselves are influenced by design parameters and operating variables. Further complicating the outlook for future fuel-coating technology and nuclear energy production is the fact that a variety of new concepts will involve fuel kernels of different sizes and with compositions of different densities. Therefore, without a fundamental understanding of the underlying phenomena in the spouted bed TRISO coater, a significant amount of effort is required for production of each type of particle, with a significant risk of not meeting the specifications. This difficulty will significantly and negatively impact the applications of AGRs for power generation and cause further challenges to their adoption as an alternative source of commercial energy production. Accordingly, the proposed work seeks to overcome such hurdles and advance the scale-up, design, and performance of TRISO fuel particle spouted bed coaters. The overall objectives of the proposed work are

  15. Adaptation to extreme climate events at a regional scale

    OpenAIRE

    Hoffmann, Christin

    2017-01-01

    A significant increase in the frequency, intensity and duration of extreme climate events in Switzerland creates the need to find a strategy to deal with the damages they cause. For more than two decades, mitigation has been the main objective of climate policy. However, due to already high atmospheric carbon concentrations and the inertia of the climate system, climate change is unavoidable to some degree, even if today’s emissions were almost completely cut back. Along with the high...

  16. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    Directory of Open Access Journals (Sweden)

    Graham Cormode

    Full Text Available Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
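
    The sketch below shows the basic single-table, random-hyperplane (SimHash-style) LSH idea for cosine similarity; it omits the multi-probe querying and the distributed (Hadoop) setting evaluated in the paper, and all data are synthetic.

```python
import numpy as np
from collections import defaultdict

class RandomHyperplaneLSH:
    """SimHash-style LSH for cosine similarity: items with the same sign pattern
    under a set of random hyperplanes fall into the same bucket."""
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _hash(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, key, v):
        self.buckets[self._hash(v)].append(key)

    def query(self, v):
        # returns the candidate set (same-bucket items); an exact re-ranking step would follow
        return self.buckets.get(self._hash(v), [])

# Toy usage with random stand-ins for query embeddings
rng = np.random.default_rng(1)
index = RandomHyperplaneLSH(dim=64, n_bits=12)
vectors = {i: rng.standard_normal(64) for i in range(1000)}
for k, v in vectors.items():
    index.add(k, v)
probe = vectors[42] + 0.01 * rng.standard_normal(64)   # near-duplicate of item 42
print(index.query(probe))
```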

  17. Extreme learning machines 2013 algorithms and applications

    CERN Document Server

    Toh, Kar-Ann; Romay, Manuel; Mao, Kezhi

    2014-01-01

    In recent years, ELM has emerged as a revolutionary technique of computational intelligence and has attracted considerable attention. An extreme learning machine (ELM) is a learning system resembling a single-layer feed-forward neural network, whose connections from the input layer to the hidden layer are randomly generated, while the connections from the hidden layer to the output layer are learned through linear learning methods. The outstanding merits of the extreme learning machine (ELM) are its fast learning speed, minimal human intervention, and high scalability.   This book contains some selected papers from the International Conference on Extreme Learning Machine 2013, which was held in Beijing, China, October 15-17, 2013. This conference aims to bring together researchers and practitioners of extreme learning machine from a variety of fields, including artificial intelligence, biomedical engineering and bioinformatics, system modelling and control, and signal and image processing, to promote research and discu...
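
    A minimal ELM matching the description above (random, untrained input-to-hidden weights; output weights from a linear least-squares solve) might look like the following sketch; the activation, hidden-layer size, and toy data are assumptions for illustration.

```python
import numpy as np

class ELM:
    """Basic extreme learning machine: random hidden layer, linear least-squares output layer."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))  # random, never trained
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)               # linear solve for output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy regression example
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (500, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
print(ELM(n_hidden=200).fit(X, y).predict(X[:3]))
```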

  18. Hidden conformal symmetry of extremal black holes

    International Nuclear Information System (INIS)

    Chen Bin; Long Jiang; Zhang Jiaju

    2010-01-01

    We study the hidden conformal symmetry of extremal black holes. We introduce a new set of conformal coordinates to write the SL(2,R) generators. We find that the Laplacian of the scalar field in many extremal black holes, including Kerr(-Newman), Reissner-Nordström, warped AdS_3, and null warped black holes, could be written in terms of the SL(2,R) quadratic Casimir. This suggests that there exist dual conformal field theory (CFT) descriptions of these black holes. From the conformal coordinates, the temperatures of the dual CFTs could be read directly. For the extremal black hole, the Hawking temperature is vanishing. Correspondingly, only the left (right) temperature of the dual CFT is nonvanishing, and the excitations of the other sector are suppressed. In the probe limit, we compute the scattering amplitudes of the scalar off the extremal black holes and find perfect agreement with the CFT prediction.

  19. Extreme events in total ozone over Arosa – Part 1: Application of extreme value theory

    Directory of Open Access Journals (Sweden)

    H. E. Rieder

    2010-10-01

    Full Text Available In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values of total ozone data) do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss, and major volcanic eruptions) can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
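
    The peaks-over-threshold step described above can be sketched with scipy as follows; the synthetic series, the constant 95th-percentile threshold (the paper uses a daily moving threshold), and the return-level horizon are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ozone = rng.gamma(shape=9.0, scale=35.0, size=5000)   # synthetic total-ozone series (DU)

# Peaks-over-threshold: exceedances above a high (here constant) threshold
threshold = np.quantile(ozone, 0.95)
excess = ozone[ozone > threshold] - threshold

# Fit the Generalized Pareto Distribution to the excesses (location fixed at 0)
shape, loc, scale = stats.genpareto.fit(excess, floc=0)

# Return level: the value exceeded on average once every N observations
N = 1000
p_exceed = excess.size / ozone.size
return_level = threshold + stats.genpareto.ppf(1 - 1.0 / (N * p_exceed), shape, loc=0, scale=scale)
print(shape, scale, return_level)
```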

  20. Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale

    International Nuclear Information System (INIS)

    Anselmi, Stefano; Pietroni, Massimo

    2012-01-01

    A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group, and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations by multi-streaming at small scales and low redshifts.

  1. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double-buffering scheme for multiple streams to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  2. The structure and large-scale organization of extreme cold waves over the conterminous United States

    Science.gov (United States)

    Xie, Zuowei; Black, Robert X.; Deng, Yi

    2017-12-01

    Extreme cold waves (ECWs) occurring over the conterminous United States (US) are studied through a systematic identification and documentation of their local synoptic structures, associated large-scale meteorological patterns (LMPs), and forcing mechanisms external to the US. Focusing on the boreal cool season (November-March) for 1950‒2005, a hierarchical cluster analysis identifies three ECW patterns, respectively characterized by cold surface air temperature anomalies over the upper midwest (UM), northwestern (NW), and southeastern (SE) US. Locally, ECWs are synoptically organized by anomalous high pressure and northerly flow. At larger scales, the UM LMP features a zonal dipole in the mid-tropospheric height field over North America, while the NW and SE LMPs each include a zonal wave train extending from the North Pacific across North America into the North Atlantic. The Community Climate System Model version 4 (CCSM4) in general simulates the three ECW patterns quite well and successfully reproduces the observed enhancements in the frequency of their associated LMPs. La Niña and the cool phase of the Pacific Decadal Oscillation (PDO) favor the occurrence of NW ECWs, while the warm PDO phase, low Arctic sea ice extent and high Eurasian snow cover extent (SCE) are associated with elevated SE-ECW frequency. Additionally, high Eurasian SCE is linked to increases in the occurrence likelihood of UM ECWs.
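
    The hierarchical clustering step can be illustrated with scipy as below; the flattened anomaly "maps" are synthetic stand-ins for the observed cold-wave composites, and Ward linkage with a three-cluster cut is an assumption chosen to mirror the three ECW patterns.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy stand-in: each row is a flattened surface-temperature anomaly map for one extreme-cold day,
# drawn from three synthetic "regimes" with different regional centers of cold air
centers = rng.normal(-2, 2, (3, 200))
anomalies = np.vstack([c + rng.normal(0, 1, (100, 200)) for c in centers])

# Ward linkage on the day-by-day anomaly patterns, cut into three clusters (UM/NW/SE analogues)
Z = linkage(anomalies, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])   # number of days assigned to each cold-wave pattern
```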

  3. Multi-scale data visualization for computational astrophysics and climate dynamics at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Ahern, Sean; Daniel, Jamison R; Gao, Jinzhu; Ostrouchov, George; Toedte, Ross J; Wang, Chaoli

    2006-01-01

    Computational astrophysics and climate dynamics are two principal application foci at the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL). We identify a dataset frontier that is shared by several SciDAC computational science domains and present an exploration of traditional production visualization techniques enhanced with new enabling research technologies such as advanced parallel occlusion culling and high resolution small multiples statistical analysis. In collaboration with our research partners, these techniques will allow the visual exploration of a new generation of peta-scale datasets that cross this data frontier along all axes

  4. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  5. Vulnerability assessment of Central-East Sardinia (Italy) to extreme rainfall events

    Directory of Open Access Journals (Sweden)

    A. Bodini

    2010-01-01

    Full Text Available In Sardinia (Italy), the highest frequency of extreme events is recorded in the Central-East area (3–4 events per year). The presence of high and steep mountains near the sea on the central and south-eastern coast causes an East-West precipitation gradient, especially in autumn, due to hot and moist currents coming from Africa. Soil structure and utilization make this area highly vulnerable to flash flooding and landslides. The specific purpose of this work is to provide a description of the heavy rainfall phenomenon on a statistical basis. The analysis mainly focuses on (i) the existence of trends in heavy rainfall and (ii) the characterization of the distribution of extreme events. First, to study possible trends in extreme events a few indices have been analyzed by the linear regression test. The analysis has been carried out at annual and seasonal scales. Then, extreme values analysis has been carried out by fitting a Generalized Pareto Distribution (GPD) to the data. As far as trends are concerned, different results are obtained at the two temporal scales: significant trends are obtained at the seasonal scale which are masked at the annual scale. By combining trend analysis and GPD analysis, the vulnerability of the study area to the occurrence of heavy rainfall has been characterized. Therefore, this work might support the improvement of land use planning and the application of suitable prevention systems. Future work will consider the extension of the analysis to all Sardinia and the application of statistical methods taking into account the spatial correlation of extreme events.

  6. Diagnosing causes of extreme aerosol optical depth events

    Science.gov (United States)

    Bernstein, D. N.; Sullivan, R.; Crippa, P.; Thota, A.; Pryor, S. C.

    2017-12-01

    Aerosol burdens and optical properties exhibit substantial spatiotemporal variability, and simulation of current and possible future aerosol burdens and characteristics exhibits relatively high uncertainty due to uncertainties in emission estimates and in chemical and physical processes associated with aerosol formation, dynamics and removal. We report research designed to improve understanding of the causes and characteristics of extreme aerosol optical depth (AOD) at the regional scale, and diagnose and attribute model skill in simulating these events. Extreme AOD events over the US Midwest are selected by identifying all dates on which AOD in a MERRA-2 reanalysis grid cell exceeds the local seasonally computed 90th percentile (p90) value during 2004-2016 and then finding the dates on which the highest number of grid cells exceed their local p90. MODIS AOD data are subsequently used to exclude events dominated by wildfires. MERRA-2 data are also analyzed within a synoptic classification to determine in what ways the extreme AOD events are atypical and to identify possible meteorological `finger-prints' that can be detected in regional climate model simulations of future climate states to project possible changes in the occurrence of extreme AOD. Then WRF-Chem v3.6 is applied at 12-km resolution and regridded to the MERRA-2 resolution over eastern North America to quantify model performance, and also evaluated using in situ measurements of columnar AOD (AERONET) and near-surface PM2.5 (US EPA). Finally the sensitivity to (i) spin-up time (including procedure used to spin-up the chemistry), (ii) modal versus sectional aerosol schemes, (iii) meteorological nudging, (iv) chemistry initial and boundary conditions, and (v) anthropogenic emissions is quantified. Despite recent declines in mean AOD, supraregional (> 1000 km) extreme AOD events continue to occur. During these events AOD exceeds 0.6 in many Midwestern grid cells for multiple consecutive days. In all
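
    The event-selection step described above (counting, for each day, how many grid cells exceed their local 90th percentile) can be sketched as follows; the array shape and gamma-distributed AOD values are synthetic stand-ins for the MERRA-2 fields, and the seasonal stratification is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a gridded reanalysis AOD field: (time, lat, lon)
aod = rng.gamma(2.0, 0.08, size=(4745, 20, 30))    # ~13 years of daily fields

# Local 90th percentile for every grid cell (a real analysis would compute this by season)
p90 = np.quantile(aod, 0.90, axis=0)

# For each day, count how many grid cells exceed their local p90 ...
n_exceed = (aod > p90).sum(axis=(1, 2))

# ... and take the days with the most widespread exceedance as candidate extreme-AOD events
event_days = np.argsort(n_exceed)[-10:][::-1]
print(event_days, n_exceed[event_days])
```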

  7. The Spatial Scaling of Global Rainfall Extremes

    Science.gov (United States)

    Devineni, N.; Xi, C.; Lall, U.; Rahill-Marier, B.

    2013-12-01

    Floods associated with severe storms are a significant source of risk for property, life and supply chains. These property losses tend to be determined as much by the duration of flooding as by the depth and velocity of inundation. High duration floods are typically induced by persistent rainfall (upto 30 day duration) as seen recently in Thailand, Pakistan, the Ohio and the Mississippi Rivers, France, and Germany. Events related to persistent and recurrent rainfall appear to correspond to the persistence of specific global climate patterns that may be identifiable from global, historical data fields, and also from climate models that project future conditions. A clear understanding of the space-time rainfall patterns for events or for a season will enable in assessing the spatial distribution of areas likely to have a high/low inundation potential for each type of rainfall forcing. In this paper, we investigate the statistical properties of the spatial manifestation of the rainfall exceedances. We also investigate the connection of persistent rainfall events at different latitudinal bands to large-scale climate phenomena such as ENSO. Finally, we present the scaling phenomena of contiguous flooded areas as a result of large scale organization of long duration rainfall events. This can be used for spatially distributed flood risk assessment conditional on a particular rainfall scenario. Statistical models for spatio-temporal loss simulation including model uncertainty to support regional and portfolio analysis can be developed.

  8. Structural extremes in a cretaceous dinosaur.

    Directory of Open Access Journals (Sweden)

    Paul C Sereno

    Full Text Available Fossils of the Early Cretaceous dinosaur, Nigersaurus taqueti, document for the first time the cranial anatomy of a rebbachisaurid sauropod. Its extreme adaptations for herbivory at ground-level challenge current hypotheses regarding feeding function and feeding strategy among diplodocoids, the larger clade of sauropods that includes Nigersaurus. We used high resolution computed tomography, stereolithography, and standard molding and casting techniques to reassemble the extremely fragile skull. Computed tomography also allowed us to render the first endocast for a sauropod preserving portions of the olfactory bulbs, cerebrum and inner ear, the latter permitting us to establish habitual head posture. To elucidate evidence of tooth wear and tooth replacement rate, we used photographic-casting techniques and crown thin sections, respectively. To reconstruct its 9-meter postcranial skeleton, we combined and size-adjusted multiple partial skeletons. Finally, we used maximum parsimony algorithms on character data to obtain the best estimate of phylogenetic relationships among diplodocoid sauropods. Nigersaurus taqueti shows extreme adaptations for a dinosaurian herbivore including a skull of extremely light construction, tooth batteries located at the distal end of the jaws, tooth replacement as fast as one per month, an expanded muzzle that faces directly toward the ground, and hollow presacral vertebral centra with more air sac space than bone by volume. A cranial endocast provides the first reasonably complete view of a sauropod brain including its small olfactory bulbs and cerebrum. Skeletal and dental evidence suggests that Nigersaurus was a ground-level herbivore that gathered and sliced relatively soft vegetation, the culmination of a low-browsing feeding strategy first established among diplodocoids during the Jurassic.

  9. The nonstationary impact of local temperature changes and ENSO on extreme precipitation at the global scale

    Science.gov (United States)

    Sun, Qiaohong; Miao, Chiyuan; Qiao, Yuanyuan; Duan, Qingyun

    2017-12-01

    The El Niño-Southern Oscillation (ENSO) and local temperature are important drivers of extreme precipitation. Understanding the impact of ENSO and temperature on the risk of extreme precipitation over global land will provide a foundation for risk assessment and climate-adaptive design of infrastructure in a changing climate. In this study, nonstationary generalized extreme value distributions were used to model extreme precipitation over global land for the period 1979-2015, with an ENSO indicator and temperature as covariates. Risk factors were estimated to quantify the contrast between the influence of different ENSO phases and temperature. The results show that extreme precipitation is dominated by ENSO over 22% of global land and by temperature over 26% of global land. With a warming climate, the risk of high-intensity daily extreme precipitation increases at high latitudes but decreases in tropical regions. For ENSO, large parts of North America, southern South America, and southeastern and northeastern China are shown to suffer greater risk in El Niño years, with more than double the chance of intense extreme precipitation in El Niño years compared with La Niña years. Moreover, regions with more intense precipitation are more sensitive to ENSO. Global climate models were used to investigate the changing relationship between extreme precipitation and the covariates. The risk of extreme, high-intensity precipitation increases across high latitudes of the Northern Hemisphere but decreases in middle and lower latitudes under a warming climate scenario, and will likely trigger increases in severe flooding and droughts across the globe. However, there are some uncertainties associated with the influence of ENSO on predictions of future extreme precipitation, with the spatial extent and risk varying among the different models.
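
    A minimal sketch of a nonstationary GEV fit, with the location parameter depending linearly on an ENSO-like covariate, is shown below; the data are synthetic and the optimizer, starting values, and linear form are illustrative assumptions. Note that scipy's genextreme uses the shape convention c = -ξ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)
n = 60                                         # e.g. 60 years of annual-maximum daily precipitation
enso = rng.normal(0, 1, n)                     # synthetic ENSO index (covariate)
annmax = genextreme.rvs(c=-0.1, loc=50 + 5 * enso, scale=10, size=n, random_state=1)

def neg_log_lik(theta):
    """Nonstationary GEV: location mu = mu0 + mu1 * covariate; scale and shape held constant."""
    mu0, mu1, log_sigma, c = theta
    return -genextreme.logpdf(annmax, c=c, loc=mu0 + mu1 * enso,
                              scale=np.exp(log_sigma)).sum()

fit = minimize(neg_log_lik,
               x0=[np.mean(annmax), 0.0, np.log(np.std(annmax)), -0.1],
               method="Nelder-Mead")
mu0, mu1, log_sigma, c = fit.x
print(f"location sensitivity to ENSO: {mu1:.2f} (scipy shape c = -xi = {c:.2f})")
```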

  10. Off the scale: a new species of fish-scale gecko (Squamata: Gekkonidae: Geckolepis) with exceptionally large scales

    Directory of Open Access Journals (Sweden)

    Mark D. Scherz

    2017-02-01

    Full Text Available The gecko genus Geckolepis, endemic to Madagascar and the Comoro archipelago, is taxonomically challenging. One reason is its members' ability to autotomize a large portion of their scales when grasped or touched, most likely to escape predation. Based on an integrative taxonomic approach including external morphology, morphometrics, genetics, pholidosis, and osteology, we here describe the first new species from this genus in 75 years: Geckolepis megalepis sp. nov. from the limestone karst of Ankarana in northern Madagascar. The new species has the largest known body scales of any gecko (both relatively and absolutely), which come off with exceptional ease. We provide a detailed description of the skeleton of the genus Geckolepis based on micro-Computed Tomography (micro-CT) analysis of the new species, the holotype of G. maculata, the recently resurrected G. humbloti, and a specimen belonging to an operational taxonomic unit (OTU) recently suggested to represent G. maculata. Geckolepis is characterized by highly mineralized, imbricated scales, paired frontals, and unfused subolfactory processes of the frontals, among other features. We identify diagnostic characters in the osteology of these geckos that help define our new species and show that the OTU assigned to G. maculata is probably not conspecific with it, leaving the taxonomic identity of this species unclear. We discuss possible reasons for the extremely enlarged scales of G. megalepis in the context of an anti-predator defence mechanism, and the future of Geckolepis taxonomy.

  11. Extreme hydronephrosis due to ureteropelvic junction obstruction in an infant (case report).

    Science.gov (United States)

    Krzemień, Grażyna; Szmigielska, Agnieszka; Bombiński, Przemysław; Barczuk, Marzena; Biejat, Agnieszka; Warchoł, Stanisław; Dudek-Warchoł, Teresa

    2016-01-01

    Hydronephrosis is one of the most common congenital abnormalities of the urinary tract. The left kidney is more commonly affected than the right, and the condition is more common in males. The aim was to determine the role of ultrasonography, renal dynamic scintigraphy and lower-dose computed tomography urography in the preoperative diagnostic workup of an infant with extreme hydronephrosis. We present a boy with antenatally diagnosed hydronephrosis. In serial postnatal ultrasonography, renal scintigraphy and computed tomography urography, we observed slightly declining function in the dilated kidney and increasing pelvic dilatation. Pyeloplasty was performed at the age of four months with a good result. Results of ultrasonography and renal dynamic scintigraphy in a child with extreme hydronephrosis can be difficult to assess; therefore, before the surgical procedure a lower-dose computed tomography urography should be performed.

  12. A computationally inexpensive CFD approach for small-scale biomass burners equipped with enhanced air staging

    International Nuclear Information System (INIS)

    Buchmayr, M.; Gruber, J.; Hargassner, M.; Hochenauer, C.

    2016-01-01

    Highlights: • Time-efficient CFD model to predict biomass boiler performance. • Boundary conditions for numerical modeling are provided by measurements. • Tars in the product from primary combustion were considered. • Simulation results were validated by experiments on a real-scale reactor. • Very good agreement between experimental and simulation results. - Abstract: Computational Fluid Dynamics (CFD) is an upcoming technique for optimization and as a part of the design process of biomass combustion systems. An accurate simulation of biomass combustion can so far only be provided with high computational effort. This work presents an accurate, time-efficient CFD approach for small-scale biomass combustion systems equipped with enhanced air staging. The model can handle the high amount of biomass tars in the primary combustion product at very low primary air ratios. Gas-phase combustion in the freeboard was modelled with the Steady Flamelet Model (SFM) together with a detailed heptane combustion mechanism. The advantage of the SFM is that complex combustion chemistry can be taken into account at low computational effort, because only two additional transport equations have to be solved to describe the chemistry in the reacting flow. Boundary conditions for the primary combustion product composition were obtained from the fuel bed by experiments. The fuel bed data were used as the fuel inlet boundary condition for the gas-phase combustion model. The numerical and experimental investigations were performed for different operating conditions and varying wood-chip moisture on a specially designed real-scale reactor. The numerical predictions were validated with experimental results and a very good agreement was found. With the presented approach accurate results can be provided within 24 h using a standard Central Processing Unit (CPU) consisting of six cores. Case studies e.g. for combustion geometry improvement can be realized effectively due to the short calculation

  13. Extreme Weather and Climate: Workshop Report

    Science.gov (United States)

    Sobel, Adam; Camargo, Suzana; Debucquoy, Wim; Deodatis, George; Gerrard, Michael; Hall, Timothy; Hallman, Robert; Keenan, Jesse; Lall, Upmanu; Levy, Marc

    2016-01-01

    Extreme events are the aspects of climate to which human society is most sensitive. Due to both their severity and their rarity, extreme events can challenge the capacity of physical, social, economic and political infrastructures, turning natural events into human disasters. Yet, because they are low frequency events, the science of extreme events is very challenging. Among the challenges is the difficulty of connecting extreme events to longer-term, large-scale variability and trends in the climate system, including anthropogenic climate change. How can we best quantify the risks posed by extreme weather events, both in the current climate and in the warmer and different climates to come? How can we better predict them? What can we do to reduce the harm done by such events? In response to these questions, the Initiative on Extreme Weather and Climate has been created at Columbia University in New York City (extremeweather.columbia.edu). This Initiative is a University-wide activity focused on understanding the risks to human life, property, infrastructure, communities, institutions, ecosystems, and landscapes from extreme weather events, both in the present and future climates, and on developing solutions to mitigate those risks. In May 2015, the Initiative held its first science workshop, entitled Extreme Weather and Climate: Hazards, Impacts, Actions. The purpose of the workshop was to define the scope of the Initiative; the tremendously broad intellectual footprint of the topic is indicated by the titles of the presentations (see Table 1). The intent of the workshop was to stimulate thought across disciplinary lines by juxtaposing talks whose subjects differed dramatically. Each session concluded with question and answer panel sessions. Approximately 150 people were in attendance throughout the day. Below is a brief synopsis of each presentation. The synopses collectively reflect the variety and richness of the emerging extreme event research agenda.

  14. Models and Inference for Multivariate Spatial Extremes

    KAUST Repository

    Vettori, Sabrina

    2017-12-07

    The development of flexible and interpretable statistical methods is necessary in order to provide appropriate risk assessment measures for extreme events and natural disasters. In this thesis, we address this challenge by contributing to the developing research field of Extreme-Value Theory. We initially study the performance of existing parametric and non-parametric estimators of extremal dependence for multivariate maxima. As the dimensionality increases, non-parametric estimators are more flexible than parametric methods but present some loss in efficiency that we quantify under various scenarios. We introduce a statistical tool which imposes the required shape constraints on non-parametric estimators in high dimensions, significantly improving their performance. Furthermore, by embedding the tree-based max-stable nested logistic distribution in the Bayesian framework, we develop a statistical algorithm that identifies the most likely tree structures representing the data's extremal dependence using the reversible-jump Markov chain Monte Carlo method. A mixture of these trees is then used for uncertainty assessment in prediction through Bayesian model averaging. The computational complexity of full likelihood inference is significantly decreased by deriving a recursive formula for the nested logistic model likelihood. The algorithm performance is verified through simulation experiments which also compare different likelihood procedures. Finally, we extend the nested logistic representation to the spatial framework in order to jointly model multivariate variables collected across a spatial region. This situation emerges often in environmental applications but is not often considered in the current literature. Simulation experiments show that the new class of multivariate max-stable processes is able to detect both the cross and inner spatial dependence of a number of extreme variables at a relatively low computational cost, thanks to its Bayesian hierarchical

  15. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    Science.gov (United States)

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
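
    A minimal sketch of the communication pattern described above, assuming mpi4py and exactly two MPI ranks, one per half of a split cell; it is not NEURON's actual solver, and the buffer contents are placeholders:

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        partner = 1 - comm.Get_rank()      # assumes exactly 2 ranks, one per subtree

        send_buf = np.zeros(2)             # the two doubles coupling the subtrees at the split node
        recv_buf = np.empty(2)
        for step in range(1000):
            # ... advance this subtree's equations and fill send_buf ...
            comm.Sendrecv(send_buf, dest=partner, recvbuf=recv_buf, source=partner)
            # ... fold recv_buf back into this subtree's matrix solve ...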

  16. Evaluation of empirical relationships between extreme rainfall and daily maximum temperature in Australia

    Science.gov (United States)

    Herath, Sujeewa Malwila; Sarukkalige, Ranjan; Nguyen, Van Thanh Van

    2018-01-01

    Understanding the relationships between extreme daily and sub-daily rainfall events and their governing factors is important in order to analyse the properties of extreme rainfall events in a changing climate. Atmospheric temperature is one of the dominant climate variables which has a strong relationship with extreme rainfall events. In this study, a temperature-rainfall binning technique is used to evaluate the dependency of extreme rainfall on daily maximum temperature. The Clausius-Clapeyron (C-C) relation was found to describe the relationship between daily maximum temperature and a range of rainfall durations from 6 min up to 24 h for seven Australian weather stations, located in Adelaide, Brisbane, Canberra, Darwin, Melbourne, Perth and Sydney. The analysis shows that the rainfall-temperature scaling varies with location, temperature and rainfall duration. The Darwin Airport station shows a negative scaling relationship, while the other six stations show a positive relationship. To identify the trend in the scaling relationship over time, the same analysis is conducted using data covering 10-year periods. Results indicate that the dependency of extreme rainfall on temperature also varies with the analysis period. Further, this dependency shows an increasing trend for more extreme short-duration rainfall and a decreasing trend for average long-duration rainfall events at most stations. Seasonal variations of the scaling trends were analysed by grouping the summer and autumn seasons together and the winter and spring seasons together. The 99th percentile scaling for the 6 min, 1 h and 24 h rainfall durations at the Perth, Melbourne and Sydney stations shows an increasing trend for both groups, while Adelaide and Darwin show a decreasing trend. Furthermore, the majority of the 50th percentile scaling trends are decreasing for both groups.
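
    The temperature-rainfall binning idea can be sketched in a few lines; the following is only a hedged illustration (the column names tmax and rain, the bin width and the percentile are assumptions, not the study's settings): events are grouped into daily-maximum-temperature bins, a high percentile of rainfall intensity is computed per bin, and the scaling rate in % per degree is estimated from a log-linear fit for comparison with the roughly 7% per degree Clausius-Clapeyron rate.

        import numpy as np
        import pandas as pd

        def cc_scaling_rate(df, temp_col="tmax", rain_col="rain", q=0.99, bin_width=2.0):
            """Estimate the percentage change of the q-th rainfall percentile per degC."""
            edges = np.arange(df[temp_col].min(), df[temp_col].max() + bin_width, bin_width)
            binned = df.assign(tbin=pd.cut(df[temp_col], edges))
            pq = binned.groupby("tbin", observed=True)[rain_col].quantile(q)
            centers = np.array([iv.mid for iv in pq.index])
            valid = pq.values > 0
            # log-linear fit: log(P_q) = a + b*T, so the rate is (exp(b) - 1) * 100 % per degC
            b, a = np.polyfit(centers[valid], np.log(pq.values[valid]), 1)
            return (np.exp(b) - 1.0) * 100.0

        # usage with a hypothetical station data frame:
        # rate = cc_scaling_rate(station_df)   # compare with ~7 % per degC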

  17. Satellite-Enhanced Dynamical Downscaling of Extreme Events

    Science.gov (United States)

    Nunes, A.

    2015-12-01

    Severe weather events can be the triggers of environmental disasters in regions particularly susceptible to changes in hydrometeorological conditions. In that regard, the reconstruction of past extreme weather events can help in the assessment of vulnerability and risk mitigation actions. Using novel modeling approaches, dynamical downscaling of long-term integrations from global circulation models can be useful for risk analysis, providing more accurate climate information at regional scales. Originally developed at the National Centers for Environmental Prediction (NCEP), the Regional Spectral Model (RSM) is being used in the dynamical downscaling of global reanalysis, within the South American Hydroclimate Reconstruction Project. Here, RSM combines scale-selective bias correction with assimilation of satellite-based precipitation estimates to downscale extreme weather occurrences. Scale-selective bias correction is a method employed in the downscaling, similar to the spectral nudging technique, in which the downscaled solution develops in agreement with its coarse boundaries. Precipitation assimilation acts on modeled deep-convection, drives the land-surface variables, and therefore the hydrological cycle. During the downscaling of extreme events that took place in Brazil in recent years, RSM continuously assimilated NCEP Climate Prediction Center morphing technique precipitation rates. As a result, RSM performed better than its global (reanalysis) forcing, showing more consistent hydrometeorological fields compared with more sophisticated global reanalyses. Ultimately, RSM analyses might provide better-quality initial conditions for high-resolution numerical predictions in metropolitan areas, leading to more reliable short-term forecasting of severe local storms.

  18. Large-Scale Skin Resurfacing of the Upper Extremity in Pediatric Patients Using a Pre-Expanded Intercostal Artery Perforator Flap.

    Science.gov (United States)

    Wei, Jiao; Herrler, Tanja; Gu, Bin; Yang, Mei; Li, Qingfeng; Dai, Chuanchang; Xie, Feng

    2018-05-01

    The repair of extensive upper limb skin lesions in pediatric patients is extremely challenging due to substantial limitations of flap size and donor-site morbidity. We aimed to create an oversize preexpanded flap based on intercostal artery perforators for large-scale resurfacing of the upper extremity in children. Between March 2013 and August 2016, 11 patients underwent reconstructive treatment for extensive skin lesions in the upper extremity using a preexpanded intercostal artery perforator flap. Preoperatively, 2 to 4 candidate perforators were selected as potential pedicle vessels based on duplex ultrasound examination. After tissue expander implantation in the thoracodorsal area, regular saline injections were performed until the expanded flap was sufficient in size. Then, a pedicled flap was formed to resurface the skin lesion of the upper limb. The pedicles were transected 3 weeks after flap transfer. Flap survival, complications, and long-term outcome were evaluated. The average time of tissue expansion was 133 days, with a mean final volume of 1713 mL. The thoracoabdominal flaps were based on 2 to 6 pedicles and used to resurface a mean skin defect area of 238 cm², ranging from 180 to 357 cm². In all cases, primary donor-site closure was achieved. Marginal necrosis was seen in 5 cases. The reconstructed limbs showed satisfactory outcome in both aesthetic and functional aspects. The preexpanded intercostal artery perforator flap enables 1-block repair of extensive upper limb skin lesions. Due to limited donor-site morbidity and a pedicled technique, this resurfacing approach represents a useful tool, especially in pediatric patients.

  19. Heavy Tail Behavior of Rainfall Extremes across Germany

    Science.gov (United States)

    Castellarin, A.; Kreibich, H.; Vorogushyn, S.; Merz, B.

    2017-12-01

    Distributions are termed heavy-tailed if extreme values are more likely than would be predicted by probability distributions that have exponential asymptotic behavior. Heavy-tail behavior often leads to surprise, because historical observations can be a poor guide for the future. Heavy-tail behavior seems to be widespread for hydro-meteorological extremes, such as extreme rainfall and flood events. To date there have been only vague hints to explain under which conditions these extremes show heavy-tail behavior. We use an observational data set consisting of 11 climate variables at 1,440 stations across Germany. This homogenized, gap-free data set covers 110 years (1901-2010) at daily resolution. We estimate the upper tail behavior, including its uncertainty interval, of daily precipitation extremes for the 1,440 stations at the annual and seasonal time scales. Different tail indicators are tested, including the shape parameter of the Generalized Extreme Value distribution, the upper tail ratio and the obesity index. In a further step, we explore to which extent the tail behavior can be explained by geographical and climate factors. A large number of characteristics are derived, such as station elevation, degree of continentality, aridity, measures for quantifying the variability of humidity and wind velocity, or the event-triggering large-scale atmospheric situation. The link between the upper tail behavior and these characteristics is investigated via data mining methods capable of detecting non-linear relationships in large data sets. This exceptionally rich observational data set, in terms of number of stations, length of time series and number of explaining variables, allows insights into the upper tail behavior that are rarely possible given the typical observational data sets available.
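
    One of the tail indicators named above, the shape parameter of the Generalized Extreme Value distribution, can be estimated per station from annual maxima; the sketch below uses scipy with synthetic data and is only an illustration (note that scipy's genextreme uses c = -xi, so a heavy upper tail, xi > 0, corresponds to c < 0):

        import numpy as np
        from scipy.stats import genextreme

        def gev_shape(daily_precip, years):
            """GEV shape parameter xi estimated from annual maxima of daily precipitation."""
            annual_max = np.array([daily_precip[years == y].max() for y in np.unique(years)])
            c, loc, scale = genextreme.fit(annual_max)
            return -c   # xi > 0 indicates heavy-tail behaviour

        # synthetic example: 110 years of daily values
        rng = np.random.default_rng(0)
        years = np.repeat(np.arange(1901, 2011), 365)
        precip = rng.gamma(shape=0.5, scale=4.0, size=years.size)
        print(f"estimated GEV shape xi = {gev_shape(precip, years):.2f}")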

  20. Large Scale Beam-beam Simulations for the CERN LHC using Distributed Computing

    CERN Document Server

    Herr, Werner; McIntosh, E; Schmidt, F

    2006-01-01

    We report on a large scale simulation of beam-beam effects for the CERN Large Hadron Collider (LHC). The stability of particles which experience head-on and long-range beam-beam effects was investigated for different optical configurations and machine imperfections. To cover the interesting parameter space required computing resources not available at CERN. The necessary resources were available in the LHC@home project, based on the BOINC platform. At present, this project makes more than 60000 hosts available for distributed computing. We shall discuss our experience using this system during a simulation campaign of more than six months and describe the tools and procedures necessary to ensure consistent results. The results from this extended study are presented and future plans are discussed.

  1. Microwave tomography for functional imaging of extremity soft tissues: feasibility assessment

    International Nuclear Information System (INIS)

    Semenov, Serguei; Kellam, James; Althausen, Peter; Williams, Thomas; Abubakar, Aria; Bulyshev, Alexander; Sizov, Yuri

    2007-01-01

    It is important to assess the viability of extremity soft tissues, as this component is often the determinant of the final outcome of fracture treatment. Microwave tomography (MWT) and sensing might be able to provide a fast and mobile assessment of such properties. MWT imaging of extremities poses a complicated, nonlinear, high dielectric contrast inverse problem of diffraction tomography. There is a high dielectric contrast between bone and soft tissue in the extremities. A contrast between soft tissue abnormalities is less pronounced when compared with the high bone-soft tissue contrast. The goal of this study was to assess the feasibility of MWT for functional imaging of extremity soft tissues, i.e. to detect a relatively small contrast within soft tissues in close proximity to high-contrast bony areas. Both experimental studies and computer simulation were performed. Experiments were conducted using live pigs with compromised blood flow and compartment syndrome within an extremity. A whole 2D tomographic imaging cycle at 1 GHz was computer simulated and images were reconstructed using the Newton, MR-CSI and modified Born methods. Results of the experimental studies demonstrate that microwave technology is sensitive to changes in the soft tissue blood content and elevated compartment pressure. It was demonstrated that MWT is feasible for functional imaging of extremity soft tissues, circulatory-related changes, blood flow and elevated compartment pressure.

  2. Null infinity and extremal horizons in AdS-CFT

    International Nuclear Information System (INIS)

    Hickling, Andrew; Wiseman, Toby; Lucietti, James

    2015-01-01

    We consider AdS gravity duals to CFT on background spacetimes with a null infinity. Null infinity on the conformal boundary may extend to an extremal horizon in the bulk. For example, it does so for Poincaré–AdS, although it does not for planar Schwarzschild–AdS. If null infinity does extend into an extremal horizon in the bulk, we show that the bulk near-horizon geometry is determined by the geometry of the boundary null infinity. Hence the ‘infra-red’ geometry of the bulk is fixed by the large scale behaviour of the CFT spacetime. In addition the boundary stress tensor must have a particular decay at null infinity. As an application, we argue that for CFT on asymptotically flat backgrounds, any static bulk dual containing an extremal horizon extending from the boundary null infinity must have the near-horizon geometry of Poincaré–AdS. We also discuss a class of boundary null infinity that cannot extend to a bulk extremal horizon, although we give evidence that they can extend to an analogous null surface in the bulk which possesses an associated scale-invariant ‘near-geometry’. (paper)

  3. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on an analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In the cloud computing environment, the SVM parameters are first optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and the parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
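
    The GA-SVM coupling can be illustrated with a small, serial sketch (not the paper's cloud or MPI implementation): a toy genetic algorithm searches SVM hyperparameters (C, gamma) using cross-validated error as the fitness; the population size, mutation scale and parameter ranges below are arbitrary choices.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        def fitness(params, X, y):
            C, gamma = params
            model = SVR(C=C, gamma=gamma)
            # negative MSE, so larger is better
            return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

        def ga_svm(X, y, pop_size=20, generations=10, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform([-1, -4], [3, 1], size=(pop_size, 2))   # (log10 C, log10 gamma)
            for _ in range(generations):
                scores = np.array([fitness(10.0 ** ind, X, y) for ind in pop])
                parents = pop[np.argsort(scores)[-pop_size // 2:]]               # selection
                children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
                children = children + rng.normal(0.0, 0.2, children.shape)       # mutation
                pop = np.vstack([parents, children])
            best = pop[np.argmax([fitness(10.0 ** ind, X, y) for ind in pop])]
            return SVR(C=10.0 ** best[0], gamma=10.0 ** best[1]).fit(X, y)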

  4. Assessing changes in extreme convective precipitation from a damage perspective

    Science.gov (United States)

    Schroeer, K.; Tye, M. R.

    2016-12-01

    Projected increases in high-intensity short-duration convective precipitation are expected even in regions that are likely to become more arid. Such high-intensity precipitation events can trigger hazardous flash floods, debris flows and landslides that put people and local assets at risk. However, the assessment of local-scale precipitation extremes is hampered by its high spatial and temporal variability. Moreover, not only are extreme events rare, but such small-scale events are likely to be underreported where they do not coincide with the observation network. Rather than focusing solely on the convective precipitation, understanding the characteristics of these extremes which drive damage may be more effective for assessing future risks. Two sources of data are used in this study. First, sub-daily precipitation observations over the Southern Alps enable an examination of seasonal and regional patterns in high-intensity convective precipitation and their relationship with weather types. Secondly, reports of private loss and damage on a household scale are used to identify which events are most damaging, or what conditions potentially enhance the vulnerability to these extremes. This study explores the potential added value from including recorded loss and damage data to understand the risks from summertime convective precipitation events. By relating precipitation-generating weather types to the severity of damage, we hope to develop a mechanism to assess future risks. A further benefit would be to identify from damage reports the likely occurrence of precipitation extremes where no direct observations are available and use this information to validate remotely sensed observations.

  5. The Contribution of Extreme Precipitation to the Total Precipitation in China

    Institute of Scientific and Technical Information of China (English)

    SUN Jian-Qi

    2012-01-01

    Using daily precipitation data from weather stations in China, the variations in the contribution of extreme precipitation to the total precipitation are analyzed. It is found that extreme precipitation accounts for approximately one third of the total precipitation based on the overall mean for China. Over the past half century, extreme precipitation has played a dominant role in the year-to-year variability of the total precipitation. On the decadal time scale, the extreme precipitation makes different contributions to the wetting and drying regions of China. The wetting trends of particular regions are mainly attributed to increases in extreme precipitation; in contrast, the drying trends of other regions are mainly due to decreases in non-extreme precipitation.

  6. Flux scaling: Ultimate regime

    Indian Academy of Sciences (India)

    Flux scaling: Ultimate regime. With the Nusselt number and the mixing length scales, we get the Nusselt number and Reynolds number (w'd/ν) scalings, which are expected to occur at extremely high Ra in Rayleigh-Benard convection: the ultimate regime.

  7. Regional climate change trends and uncertainty analysis using extreme indices: A case study of Hamilton, Canada

    OpenAIRE

    Razavi, Tara; Switzman, Harris; Arain, Altaf; Coulibaly, Paulin

    2016-01-01

    This study aims to provide a deeper understanding of the level of uncertainty associated with the development of extreme weather frequency and intensity indices at the local scale. Several different global climate models, downscaling methods, and emission scenarios were used to develop extreme temperature and precipitation indices at the local scale in the Hamilton region, Ontario, Canada. Uncertainty associated with historical and future trends in extreme indices and future climate projectio...

  8. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  9. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  10. Spatial extreme learning machines: An application on prediction of disease counts.

    Science.gov (United States)

    Prates, Marcos O

    2018-01-01

    Extreme learning machines have gained a lot of attention from the machine learning community because of their interesting properties and computational advantages. With the increase in data collection nowadays, many sources of data have missing information, making statistical analysis harder or unfeasible. In this paper, we present a new model, coined the spatial extreme learning machine, that combines spatial modeling with extreme learning machines, keeping the nice properties of both methodologies and making it very flexible and robust. As explained throughout the text, the spatial extreme learning machine has many advantages in comparison with the traditional extreme learning machine. Through a simulation study and a real data analysis, we show how the spatial extreme learning machine can be used to improve imputation of missing data and uncertainty prediction estimation.
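
    For contrast with the spatial extension proposed above, a plain extreme learning machine is only a few lines: hidden-layer weights are drawn at random and the output weights are obtained from a single least-squares solve. The sketch below is generic, not the paper's model.

        import numpy as np

        class ELM:
            def __init__(self, n_hidden=100, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
                self.b = self.rng.normal(size=self.n_hidden)                 # random biases
                H = np.tanh(X @ self.W + self.b)                             # hidden activations
                self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # output weights
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta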

  11. Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Byna, Surendra

    2011-12-06

    Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.

  12. Scaling a Survey Course in Extreme Weather

    Science.gov (United States)

    Samson, P. J.

    2013-12-01

    "Extreme Weather" is a survey-level course offered at the University of Michigan that is broadcast via the web and serves as a research testbed to explore best practices for large class conduct. The course has led to the creation of LectureTools, a web-based student response and note-taking system that has been shown to increase student engagement dramatically in multiple courses by giving students more opportunities to participate in class. Included in this is the capacity to pose image-based questions (see image where question was "Where would you expect winds from the south") as well as multiple choice, ordered list, free response and numerical questions. Research in this class has also explored differences in learning outcomes from those who participate remotely versus those who physically come to class and found little difference. Moreover the technologies used allow instructors to conduct class from wherever they are while the students can still answer questions and engage in class discussion from wherever they are. This presentation will use LectureTools to demonstrate its features. Attendees are encouraged to bring a mobile device to the session to participate.

  13. A new way of estimating compute-boundedness and its application to dynamic voltage scaling

    DEFF Research Database (Denmark)

    Venkatachalam, Vasanth; Franz, Michael; Probst, Christian W.

    2007-01-01

    Many dynamic voltage scaling algorithms rely on measuring hardware events (such as cache misses) for predicting how much a workload can be slowed down with acceptable performance loss. The events measured, however, are at best indirectly related to execution time and clock frequency. By relating these two indicators logically, we propose a new way of predicting a workload's compute-boundedness that is based on direct observation, and only requires measuring the total execution cycles for the two highest clock frequencies. Our predictor can be used to develop dynamic voltage scaling algorithms...
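
    One way to formalize the idea, under a simple two-component execution-time model (a hedged reading of the abstract, not necessarily the authors' exact formulation): execution time splits into a frequency-scaled part W/f and a frequency-independent part M, so measured cycles satisfy C(f) = f*T(f) = W + f*M, and two cycle counts at the two highest frequencies determine W and M.

        def compute_boundedness(cycles_f1, cycles_f2, f1, f2):
            """Fraction of cycles at f1 attributable to frequency-scaled (CPU) work."""
            mem_time = (cycles_f1 - cycles_f2) / (f1 - f2)   # M, in seconds
            cpu_work = cycles_f1 - f1 * mem_time             # W, in cycles
            return cpu_work / cycles_f1                      # 1.0 means fully compute-bound

        # example: 3.0e9 cycles at 2.0 GHz and 2.7e9 cycles at 1.5 GHz give 0.6
        print(compute_boundedness(3.0e9, 2.7e9, 2.0e9, 1.5e9))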

  14. Electronic cleansing for computed tomography (CT) colonography using a scale-invariant three-material model

    NARCIS (Netherlands)

    Serlie, Iwo W. O.; Vos, Frans M.; Truyen, Roel; Post, Frits H.; Stoker, Jaap; van Vliet, Lucas J.

    2010-01-01

    A well-known reading pitfall in computed tomography (CT) colonography is posed by artifacts at T-junctions, i.e., locations where air-fluid levels interface with the colon wall. This paper presents a scale-invariant method to determine material fractions in voxels near such T-junctions. The proposed

  15. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    International Nuclear Information System (INIS)

    Shi Siqi; Zhao Yan; Wu Qu; Gao Jian; Liu Yue; Ju Wangwei; Ouyang Chuying; Xiao Ruijuan

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. (topical review)

  16. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Adelmann, A.; Markushin, V.

    2008-11-01

    ' (SNSP-HPCN) is discussing this complex. Scientific results which are made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected shows the current situation and a 5 year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started hardware-wise approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small scale parallel jobs, and is the only general purpose computing system at PSI. Several dedicated small scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement of the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair share policy that is implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of main PSI HPC users: (1) High Performance Computing is a main pillar in many important PSI research areas; (2) there is a lack in the order of 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use in the order of 600 processors per average production run; (4) the disk and tape storage growth is dramatic; (5) small HPC clusters located

  17. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    and Networking' (SNSP-HPCN) is discussing this complex. Scientific results which are made possible by PSI's engagement at CSCS (named Horizon) are summarised and PSI's future high-performance computing requirements are evaluated. The data collected shows the current situation and a 5 year extrapolation of the users' needs with respect to HPC resources is made. In consequence this report can serve as a basis for future strategic decisions with respect to a non-existing HPC road-map for PSI. PSI's institutional HPC area started hardware-wise approximately in 1999 with the assembly of a 32-processor LINUX cluster called Merlin. Merlin was upgraded several times, lastly in 2007. The Merlin cluster at PSI is used for small scale parallel jobs, and is the only general purpose computing system at PSI. Several dedicated small scale clusters followed the Merlin scheme. Many of the clusters are used to analyse data from experiments at PSI or CERN, because dedicated clusters are most efficient. The intellectual and financial involvement of the procurement (including a machine update in 2007) results in a PSI share of 25 % of the available computing resources at CSCS. The (over) usage of available computing resources by PSI scientists is demonstrated. We actually get more computing cycles than we have paid for. The reason is the fair share policy that is implemented on the Horizon machine. This policy allows us to get cycles, with a low priority, even when our bi-monthly share is used. Five important observations can be drawn from the analysis of the scientific output and the survey of future requirements of main PSI HPC users: (1) High Performance Computing is a main pillar in many important PSI research areas; (2) there is a lack in the order of 10 times the current computing resources (measured in available core-hours per year); (3) there is a trend to use in the order of 600 processors per average production run; (4) the disk and tape storage growth

  18. Development and validation of the computer technology literacy self-assessment scale for Taiwanese elementary school students.

    Science.gov (United States)

    Chang, Chiung-Sui

    2008-01-01

    The purpose of this study was to describe the development and validation of an instrument to identify various dimensions of the computer technology literacy self-assessment scale (CTLS) for elementary school students. The instrument included five CTLS dimensions (subscales): the technology operation skills, the computer usages concepts, the attitudes toward computer technology, the learning with technology, and the Internet operation skills. Participants were 1,539 elementary school students in Taiwan. Data analysis indicated that the instrument developed in the study had satisfactory validity and reliability. Correlations analysis supported the legitimacy of using multiple dimensions in representing students' computer technology literacy. Significant differences were found between male and female students, and between grades on some CTLS dimensions. Suggestions are made for use of the instrument to examine complicated interplays between students' computer behaviors and their computer technology literacy.
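
    A common reliability check for such a multi-item instrument is Cronbach's alpha per subscale; the sketch below is a generic illustration (not taken from the paper), operating on a respondents-by-items matrix.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array with rows = respondents and columns = items of one subscale."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)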

  19. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.

  20. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

    Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.
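
    The effect a DVFS governor has on the energy of a fixed amount of work can be sketched with the classic dynamic-power model P ≈ C·V²·f plus a constant static term; the capacitance, static power and frequency/voltage operating points below are placeholders, not values from the WorkflowSim extension.

        def task_energy(cycles, freq_hz, volt, cap_farads=1e-9, p_static_watts=5.0):
            """Energy (J) = (dynamic + static power) * execution time."""
            p_dyn = cap_farads * volt ** 2 * freq_hz
            time_s = cycles / freq_hz
            return (p_dyn + p_static_watts) * time_s

        # hypothetical operating points: running slower is not always cheaper once
        # static power is included
        opp = {2.0e9: 1.10, 1.5e9: 0.95, 1.0e9: 0.80}
        for f, v in opp.items():
            print(f"{f / 1e9:.1f} GHz: {task_energy(4e9, f, v):.1f} J")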

  1. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  2. Continuous Spatial Process Models for Spatial Extreme Values

    KAUST Repository

    Sang, Huiyan

    2010-01-28

    We propose a hierarchical modeling approach for explaining a collection of point-referenced extreme values. In particular, annual maxima over space and time are assumed to follow generalized extreme value (GEV) distributions, with parameters μ, σ, and ξ specified in the latent stage to reflect underlying spatio-temporal structure. The novelty here is that we relax the conditional independence assumption in the first stage of the hierarchical model, an assumption which has been adopted in previous work. This assumption implies that realizations of the surface of spatial maxima will be everywhere discontinuous. For many phenomena including, e.g., temperature and precipitation, this behavior is inappropriate. Instead, we offer a spatial process model for extreme values that provides mean square continuous realizations, where the behavior of the surface is driven by the spatial dependence which is unexplained under the latent spatio-temporal specification for the GEV parameters. In this sense, the first stage smoothing is viewed as fine scale or short range smoothing, while the larger scale smoothing will be captured in the second stage of the modeling. In addition, as would be desired, we are able to implement spatial interpolation for extreme values based on this model. A simulation study and a study on actual annual maximum rainfall for a region in South Africa are used to illustrate the performance of the model. © 2009 International Biometric Society.

  3. Extreme Events and Energy Providers: Science and Innovation

    Science.gov (United States)

    Yiou, P.; Vautard, R.

    2012-04-01

    Most socio-economic regulations related to the resilience to climate extremes, from infrastructure or network design to insurance premiums, are based on a present-day climate with an assumption of stationarity. Climate extremes (heat waves, cold spells, droughts, storms and wind stilling) affect in particular energy production, supply, demand and security in several ways. While national, European or international projects have generated vast amounts of climate projections for the 21st century, their practical use in long-term planning remains limited. Estimating probabilistic diagnostics of energy user relevant variables from those multi-model projections will help the energy sector to elaborate medium to long-term plans, and will allow the assessment of climate risks associated to those plans. The project "Extreme Events for Energy Providers" (E3P) aims at filling a gap between climate science and its practical use in the energy sector and creating in turn favourable conditions for new business opportunities. The value chain ranges from addressing research questions directly related to energy-significant climate extremes to providing innovative tools of information and decision making (including methodologies, best practices and software) and climate science training for the energy sector, with a focus on extreme events. Those tools will integrate the scientific knowledge that is developed by scientific communities, and translate it into a usable probabilistic framework. The project will deliver projection tools assessing the probabilities of future energy-relevant climate extremes at a range of spatial scales varying from pan-European to local scales. The E3P project is funded by the Knowledge and Innovation Community (KIC Climate). We will present the mechanisms of interactions between academic partners, SMEs and industrial partners for this project. Those mechanisms are elementary bricks of a climate service.

  4. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  5. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
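
    The Gram–Schmidt (QR) re-orthonormalization at the heart of these calculations is easy to illustrate at toy scale; the sketch below is a generic Benettin-style estimate for a discrete map with a known Jacobian, not the Scalapack or MAGMA implementations described above.

        import numpy as np

        def lyapunov_spectrum(step, jacobian, x0, n_steps=10000):
            """Evolve tangent vectors, re-orthonormalize by QR, average log|diag(R)|."""
            x = np.array(x0, dtype=float)
            Q = np.eye(x.size)
            sums = np.zeros(x.size)
            for _ in range(n_steps):
                Q, R = np.linalg.qr(jacobian(x) @ Q)     # Gram–Schmidt step
                Q *= np.sign(np.diag(R))                 # keep a consistent orientation
                sums += np.log(np.abs(np.diag(R)))
                x = step(x)
            return sums / n_steps

        # example: Henon map, whose largest exponent is roughly 0.42
        a, b = 1.4, 0.3
        step = lambda x: np.array([1 - a * x[0] ** 2 + x[1], b * x[0]])
        jacobian = lambda x: np.array([[-2 * a * x[0], 1.0], [b, 0.0]])
        print(lyapunov_spectrum(step, jacobian, [0.1, 0.1]))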

  6. Dynamically adaptive data-driven simulation of extreme hydrological flows

    KAUST Repository

    Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint

    2017-01-01

    evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses

  7. Modeling annual extreme temperature using generalized extreme value distribution: A case study in Malaysia

    Science.gov (United States)

    Hasan, Husna; Salam, Norfatin; Kassim, Suraiya

    2013-04-01

    Extreme temperature of several stations in Malaysia is modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model which is linear in the location parameter, while the Kota Kinabalu and Sibu stations are best fitted by a model with a trend in the logarithm of the scale parameter. The return level, i.e., the level of events (maximum temperature) which is expected to be exceeded once, on average, in a given number of years, is also obtained.
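
    The return-level calculation can be sketched with scipy (a stationary illustration with synthetic annual maxima; the paper's non-stationary models with trends in the location or scale parameters would require a covariate-dependent fit instead): the T-year return level is the quantile exceeded once per T years on average.

        import numpy as np
        from scipy.stats import genextreme

        def gev_return_level(annual_maxima, return_period_years):
            c, loc, scale = genextreme.fit(annual_maxima)
            return genextreme.ppf(1.0 - 1.0 / return_period_years, c, loc=loc, scale=scale)

        # synthetic annual maximum temperatures (degC)
        rng = np.random.default_rng(1)
        ann_max = 34.0 + rng.gumbel(0.0, 1.2, size=40)
        print(f"50-year return level: {gev_return_level(ann_max, 50):.1f} degC")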

  8. Is prescribed lower extremity weight-bearing status after geriatric lower extremity trauma associated with increased mortality?

    Science.gov (United States)

    Gitajn, Ida Leah; Connelly, Daniel; Mascarenhas, Daniel; Breazeale, Stephen; Berger, Peter; Schoonover, Carrie; Martin, Brook; O'Toole, Robert V; Pensy, Raymond; Sciadini, Marcus

    2018-02-01

    OBJECTIVE: Evaluate whether mortality after discharge is elevated in geriatric fracture patients whose lower extremity weight-bearing is restricted. DESIGN: Retrospective cohort study. SETTING: Urban Level 1 trauma center. PATIENTS/PARTICIPANTS: 1746 patients >65 years of age. INTERVENTION: Post-operative lower extremity weight-bearing status. MAIN OUTCOME MEASURE: Mortality, as determined by the Social Security Death Index. RESULTS: Univariate analysis demonstrated that patients who were weight-bearing as tolerated on bilateral lower extremities (BLE) had significantly higher 5-year mortality than patients with restricted weight-bearing on one lower extremity and patients with restricted weight-bearing on BLE (30%, 21% and 22%, respectively). Compared with patients who were weight-bearing as tolerated on BLE, restricted weight-bearing on one lower extremity had a hazard ratio (HR) of 0.97 (95% confidence interval 0.78 to 1.20, p = 0.76) and restricted weight-bearing on BLE had an HR of 0.91 (95% confidence interval 0.60 to 1.36, p = 0.73). CONCLUSIONS: In geriatric patients, prescribed weight-bearing status did not have a statistically significant association with mortality after discharge when controlling for age, sex, body mass index, medical comorbidities, Injury Severity Score (ISS), mechanism of injury, nonoperative treatment and admission GCS. This remained true when the analysis was restricted to operative injuries only. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Scale dependency of regional climate modeling of current and future climate extremes in Germany

    Science.gov (United States)

    Tölle, Merja H.; Schefczyk, Lukas; Gutjahr, Oliver

    2017-11-01

    A warmer climate is projected for mid-Europe, with less precipitation in summer, but with intensified extremes of precipitation and near-surface temperature. However, the extent and magnitude of such changes are associated with creditable uncertainty because of the limitations of model resolution and parameterizations. Here, we present the results of convection-permitting regional climate model simulations for Germany integrated with the COSMO-CLM using a horizontal grid spacing of 1.3 km, and additional 4.5- and 7-km simulations with convection parameterized. Of particular interest is how the temperature and precipitation fields and their extremes depend on the horizontal resolution for current and future climate conditions. The spatial variability of precipitation increases with resolution because of more realistic orography and physical parameterizations, but values are overestimated in summer and over mountain ridges in all simulations compared to observations. The spatial variability of temperature is improved at a resolution of 1.3 km, but the results are cold-biased, especially in summer. The increase in resolution from 7/4.5 km to 1.3 km is accompanied by less future warming in summer by 1 °C. Modeled future precipitation extremes will be more severe, and temperature extremes will not exclusively increase with higher resolution. Although the differences between the resolutions considered (7/4.5 km and 1.3 km) are small, we find that the differences in the changes in extremes are large. High-resolution simulations require further studies, with effective parameterizations and tunings for different topographic regions. Impact models and assessment studies may benefit from such high-resolution model results, but should account for the impact of model resolution on model processes and climate change.

  10. Pipelining Computational Stages of the Tomographic Reconstructor for Multi-Object Adaptive Optics on a Multi-GPU System

    KAUST Repository

    Charara, Ali

    2014-11-01

    The European Extremely Large Telescope project (E-ELT) is one of Europe's highest priorities in ground-based astronomy. ELTs are built on top of a variety of highly sensitive and critical astronomical instruments. In particular, a new instrument called MOSAIC has been proposed to perform multi-object spectroscopy using the Multi-Object Adaptive Optics (MOAO) technique. The core implementation of the simulation lies in the intensive computation of a tomographic reconstructor (TR), which is used to drive the deformable mirror in real time from the measurements. A new numerical algorithm is proposed (1) to capture the actual experimental noise and (2) to substantially speed up previous implementations by exposing more concurrency, while reducing the number of floating-point operations. Based on the Matrices Over Runtime System at Exascale numerical library (MORSE), a dynamic scheduler drives all computational stages of the tomographic reconstructor simulation and allows tasks to be pipelined and run out-of-order across different stages on heterogeneous systems, while ensuring data coherency and dependencies. The proposed TR simulation outperforms asymptotically previous state-of-the-art implementations with up to 13-fold speedup. At more than 50000 unknowns, this appears to be the largest-scale AO problem submitted to computation, to date, and opens new research directions for extreme scale AO simulations. © 2014 IEEE.

  11. Extreme Science (LBNL Science at the Theater)

    Energy Technology Data Exchange (ETDEWEB)

    Ajo-Franklin, Caroline; Klein, Spencer; Minor, Andrew; Torok, Tamas

    2012-02-27

    On Feb. 27, 2012 at the Berkeley Repertory Theatre, four Berkeley Lab scientists presented talks related to extreme science - and what it means to you. Topics include: Neutrino hunting in Antarctica. Learn why Spencer Klein goes to the ends of the Earth to search for these ghostly particles. From Chernobyl to Central Asia, Tamas Torok travels the globe to study microbial diversity in extreme environments. Andrew Minor uses the world's most advanced electron microscopes to explore materials at ultrahigh stresses and in harsh environments. And microbes that talk to computers? Caroline Ajo-Franklin is pioneering cellular-electrical connections that could help transform sunlight into fuel.

  12. Causal Analysis of the Unanticipated Extremity Exposure at HFEF

    Energy Technology Data Exchange (ETDEWEB)

    David E. James; Charles R. Posegate; Thomas P. Zahn; Alan G. Wagner

    2011-11-01

    This report covers the unintended extremity exposure of an operator while handling a metallurgical mount sample of irradiated fuel following an off-scale high beta radiation reading of the sample. The decision to continue working after the meter read off-scale high was made by the HPT Supervisor, which resulted in the operator at the next operation being exposed.

  13. Projecting changes in regional temperature and precipitation extremes in the United States

    OpenAIRE

    Justin T. Schoof; Scott M. Robeson

    2016-01-01

    Regional and local climate extremes, and their impacts, result from the multifaceted interplay between large-scale climate forcing, local environmental factors (physiography), and societal vulnerability. In this paper, we review historical and projected changes in temperature and precipitation extremes in the United States, with a focus on strengths and weaknesses of (1) commonly used definitions for extremes such as thresholds and percentiles, (2) statistical approaches to quantifying change...

  14. Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales

    Data.gov (United States)

    National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...

  15. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method.

    Science.gov (United States)

    Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali

    2014-03-01

    The complexity and heterogeneity of bone tissue require a multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including the microfibril scale, based on hybrid neural network (NN) computation and homogenization equations, was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation. Finite element calculation is performed at nanoscopic levels to provide a database to train an in-house NN program; and (iii) in steps 2-10, from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of the mineralized collagen fibril. The mechanical and geometrical properties of the bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to mesostructural levels. The outputs of the lowest scale are well integrated with the higher levels and serve as inputs for the next-higher-scale modeling. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
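
    The step-0 calculation described above can be illustrated for a two-phase composite: the Voigt (iso-strain) and Reuss (iso-stress) bounds are averaged to give the Hill estimate of the effective modulus. The moduli and volume fraction below are placeholders, not the paper's inputs.

        def hill_average(E1, E2, f1):
            """Hill estimate for a two-phase composite; f1 is the phase-1 volume fraction."""
            f2 = 1.0 - f1
            E_voigt = f1 * E1 + f2 * E2             # upper bound (iso-strain)
            E_reuss = 1.0 / (f1 / E1 + f2 / E2)     # lower bound (iso-stress)
            return 0.5 * (E_voigt + E_reuss)

        # hypothetical mineral-water mixture, moduli in GPa
        print(hill_average(E1=114.0, E2=2.3, f1=0.6))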

  16. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_{di} = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
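
    The two-step conversion of volume fractions in unresolved-scale runs can be written as a schematic pair of rate equations (the rates below are placeholder constants, not the paper's closure): unmixed fluid cf converts to heterogeneously mixed sub-grid fluid pf at a turbulent-mixing rate, and pf converts to atomically mixed fluid af at a diffusive rate.

        def advance_mix_fractions(cf, pf, af, dt, rate_turb=1.0, rate_diff=0.2):
            d_cf_to_pf = rate_turb * cf * dt   # cf -> pf: stirring by unresolved scales
            d_pf_to_af = rate_diff * pf * dt   # pf -> af: molecular diffusion
            cf -= d_cf_to_pf
            pf += d_cf_to_pf - d_pf_to_af
            af += d_pf_to_af
            return cf, pf, af                  # the three fractions keep a constant sum

        # start fully unmixed and advance a few steps
        cf, pf, af = 1.0, 0.0, 0.0
        for _ in range(5):
            cf, pf, af = advance_mix_fractions(cf, pf, af, dt=0.1)
        print(cf, pf, af)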

  17. Attribution of extreme rainfall from Hurricane Harvey, August 2017

    Science.gov (United States)

    van Oldenborgh, Geert Jan; van der Wiel, Karin; Sebastian, Antonia; Singh, Roop; Arrighi, Julie; Otto, Friederike; Haustein, Karsten; Li, Sihan; Vecchi, Gabriel; Cullen, Heidi

    2017-12-01

    During August 25-30, 2017, Hurricane Harvey stalled over Texas and caused extreme precipitation, particularly over Houston and the surrounding area on August 26-28. This resulted in extensive flooding with over 80 fatalities and large economic costs. It was an extremely rare event: the return period of the highest observed three-day precipitation amount, 1043.4 mm in three days at Baytown, is more than 9000 years (97.5% one-sided confidence interval) and return periods exceeded 1000 yr (750 mm in three days) over a large area in the current climate. Observations since 1880 over the region show a clear positive trend in the intensity of extreme precipitation of between 12% and 22%, roughly two times the increase of the moisture holding capacity of the atmosphere expected for 1 °C warming according to the Clausius-Clapeyron (CC) relation. This would indicate that the moisture flux was increased by both the moisture content and stronger winds or updrafts driven by the heat of condensation of the moisture. We also analysed extreme rainfall in the Houston area in three ensembles of 25 km resolution models. The first also shows 2 × CC scaling, the second 1 × CC scaling and the third did not have a realistic representation of extreme rainfall on the Gulf Coast. Extrapolating these results to the 2017 event, we conclude that global warming made the precipitation about 15% (8%-19%) more intense, or equivalently made such an event three (1.5-5) times more likely. This analysis makes clear that extreme rainfall events along the Gulf Coast are on the rise. And while fortifying Houston to fully withstand the impact of an event as extreme as Hurricane Harvey may not be economically feasible, it is critical that information regarding the increasing risk of extreme rainfall events in general should be part of the discussion about future improvements to Houston’s flood protection system.
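
    The Clausius-Clapeyron (CC) argument above corresponds to roughly 7% more atmospheric moisture per degree of warming, so the observed 12%-22% intensification is roughly twice CC. A back-of-the-envelope sketch of that comparison follows; the 7% per °C rate and the 1 °C warming are the standard approximations, not quantities derived in the study itself.

      # Back-of-the-envelope Clausius-Clapeyron (CC) scaling check.
      # The ~7%/°C rate and the 1 °C warming are standard approximations,
      # not values computed in the attribution study.

      cc_rate = 0.07        # fractional increase in saturation vapour pressure per °C
      warming = 1.0         # °C of warming considered

      one_cc = (1.0 + cc_rate) ** warming - 1.0          # about 7% intensification
      two_cc = (1.0 + cc_rate) ** (2.0 * warming) - 1.0  # about 14% intensification

      print(f"1 x CC scaling: {one_cc:.1%} more intense precipitation")
      print(f"2 x CC scaling: {two_cc:.1%} more intense precipitation")
      print("Observed trend in the study: 12%-22%, i.e. roughly 2 x CC")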

  18. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand' as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  19. HPC Colony II: FAST_OS II: Operating Systems and Runtime Systems at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Jose [IBM, Armonk, NY (United States)

    2013-11-13

    HPC Colony II has been a 36-month project focused on providing portable performance for leadership class machines—a task made difficult by the emerging variety of more complex computer architectures. The project attempts to move the burden of portable performance to adaptive system software, thereby allowing domain scientists to concentrate on their field rather than the fine details of a new leadership class machine. To accomplish our goals, we focused on adding intelligence into the system software stack. Our revised components include: new techniques to address OS jitter; new techniques to dynamically address load imbalances; new techniques to map resources according to architectural subtleties and application dynamic behavior; new techniques to dramatically improve the performance of checkpoint-restart; and new techniques to address membership service issues at scale.

  20. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the necessity for resolving turbulent flame fronts; this is imperative because of the relatively coarse computational grids which must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do this in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
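
    The artificial-thickening idea can be made concrete with the classical laminar-flame scalings, burn velocity proportional to the square root of diffusivity times reaction rate and flame thickness proportional to diffusivity over burn velocity: multiplying the diffusivity by a factor F and dividing the reaction rate by F leaves the burn velocity unchanged while thickening the flame by F. The sketch below shows only this bookkeeping; the proportionality constants are placeholders, not the model's phenomenological burn-velocity law.

      import math

      # Classical thickened-flame bookkeeping: s_L ~ sqrt(D * w), delta ~ D / s_L.
      # Proportionality constants are taken as 1 for illustration; the actual model
      # uses a phenomenological burn velocity depending on pressure, temperature
      # and turbulence intensity.

      def flame_scales(D, w):
          s_L = math.sqrt(D * w)    # laminar burn velocity scaling
          delta = D / s_L           # flame thickness scaling
          return s_L, delta

      D, w, F = 2.0e-5, 4.0e5, 50.0   # illustrative diffusivity, reaction rate, thickening factor

      s0, d0 = flame_scales(D, w)
      s1, d1 = flame_scales(F * D, w / F)   # thicken: D -> F*D, w -> w/F

      print(f"burn velocity:   {s0:.3e} -> {s1:.3e} (unchanged)")
      print(f"flame thickness: {d0:.3e} -> {d1:.3e} (x{d1 / d0:.0f})")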

  1. Seasonal Cycle in German Daily Precipitation Extremes

    Directory of Open Access Journals (Sweden)

    Madlen Fischer

    2018-01-01

    The seasonal cycle of extreme precipitation in Germany is investigated by fitting statistical models to monthly maxima of daily precipitation sums for 2,865 rain gauges. The basis is a non-stationary generalized extreme value (GEV) distribution with harmonic variation of the location and scale parameters. The negative log-likelihood serves as the forecast error for a cross validation to select adequate orders of the harmonic functions for each station. For nearly all gauges considered, the seasonal model is more appropriate to estimate return levels on a monthly scale than a stationary GEV used for individual months. The 100-year return levels show the influence of cyclones in the western part of Germany and of convective events in the eastern part. In addition to resolving the seasonality, we use a simulation study to show that annual return levels can be estimated more precisely from a monthly-resolved seasonal model than from a stationary model based on annual maxima.
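
    A minimal sketch of this kind of non-stationary GEV fit is given below, with first-order harmonics in the location and scale parameters and the negative log-likelihood as the objective. The parameterization, synthetic data, and optimizer settings are illustrative assumptions, not the authors' exact model.

      import numpy as np
      from scipy.stats import genextreme
      from scipy.optimize import minimize

      # Non-stationary GEV for monthly maxima with first-order harmonics in the
      # location (mu) and log-scale (sigma). Illustrative setup on synthetic data.

      def neg_log_lik(theta, x, month):
          mu0, mu1, mu2, s0, s1, s2, xi = theta
          phase = 2.0 * np.pi * month / 12.0
          mu = mu0 + mu1 * np.sin(phase) + mu2 * np.cos(phase)
          sigma = np.exp(s0 + s1 * np.sin(phase) + s2 * np.cos(phase))  # keeps sigma > 0
          # scipy's shape convention: c = -xi relative to the usual GEV shape xi
          return -np.sum(genextreme.logpdf(x, -xi, loc=mu, scale=sigma))

      rng = np.random.default_rng(0)
      month = np.tile(np.arange(1, 13), 30)                  # 30 years of monthly maxima
      x = genextreme.rvs(-0.1, loc=20 + 8 * np.sin(2 * np.pi * month / 12.0),
                         scale=5, size=month.size, random_state=rng)

      theta0 = np.array([x.mean(), 0.0, 0.0, np.log(x.std()), 0.0, 0.0, 0.1])
      fit = minimize(neg_log_lik, theta0, args=(x, month), method="Nelder-Mead",
                     options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
      print("fitted parameters:", np.round(fit.x, 3))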

  2. Calculation of neutron fluence-to-dose conversion factors for extremities

    International Nuclear Information System (INIS)

    Stewart, R.D.; Harty, R.; McDonald, J.C.; Tanner, J.E.

    1993-04-01

    The Pacific Northwest Laboratory is developing a standard for the performance testing of personnel extremity dosimeters for the US Department of Energy. Part of this effort requires the calculation of neutron fluence-to-dose conversion factors for finger and wrist extremities. This study focuses on conversion factors for two types of extremity models: namely the polymethyl methacrylate (PMMA) phantom (as specified in the draft standard for performance testing of extremity dosimeters) and more realistic extremity models composed of tissue-and-bone. Calculations for each type of model are based on both bare and D₂O-moderated ²⁵²Cf sources. The results are then tabulated and compared with whole-body conversion factors. More appropriate energy-averaged quality factors for the extremity models have also been computed from the neutron fluence in 50 equally spaced energy bins with energies from 2.53 × 10⁻⁸ to 15 MeV. Tabulated results show that conversion factors for both types of extremity phantom are 15 to 30% lower than the corresponding whole-body phantom conversion factors for ²⁵²Cf neutron sources. This difference in extremity and whole-body conversion factors is attributable to the proportionally smaller amount of back-scattering that occurs in the extremity phantoms compared with whole-body phantoms.
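
    The energy-averaged quality factor mentioned above is the fluence-weighted mean of Q(E) over the energy bins. A minimal sketch follows; the 50 equally spaced bins from 2.53 × 10⁻⁸ to 15 MeV mirror the abstract, but the fluence values and the toy Q(E) curve are made-up placeholders, not ICRP data.

      import numpy as np

      # Fluence-weighted (energy-averaged) quality factor over discrete energy bins.
      # Bin fluences and the toy Q(E) curve are placeholders, not ICRP values.

      def energy_averaged_q(fluence, q):
          """Q_bar = sum_i(phi_i * Q_i) / sum_i(phi_i)."""
          fluence = np.asarray(fluence, dtype=float)
          q = np.asarray(q, dtype=float)
          return np.sum(fluence * q) / np.sum(fluence)

      n_bins = 50
      energy = np.linspace(2.53e-8, 15.0, n_bins)                   # MeV, as in the abstract
      fluence = np.random.default_rng(1).uniform(0.5, 1.5, n_bins)  # arbitrary units (toy)
      q = 2.0 + 18.0 * energy / energy.max()                        # toy Q(E), rising from 2 to 20

      print(f"energy-averaged quality factor: {energy_averaged_q(fluence, q):.2f}")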

  3. Future Extreme Heat Scenarios to Enable the Assessment of Climate Impacts on Public Health over the Coterminous U.S.

    Science.gov (United States)

    Quattrochi, Dale A.; Crosson, William L.; Al-Hamdan, Mohammad Z.; Estes, Maurice G., Jr.

    2013-01-01

    In the United States, extreme heat is the most deadly weather-related hazard. In the face of a warming climate and urbanization, which contributes to local-scale urban heat islands, it is very likely that extreme heat events (EHEs) will become more common and more severe in the U.S. This research seeks to provide historical and future measures of climate-driven extreme heat events to enable assessments of the impacts of heat on public health over the coterminous U.S. We use atmospheric temperature and humidity information from meteorological reanalysis and from Global Climate Models (GCMs) to provide data on past and future heat events. The focus of the research is on providing assessments of the magnitude, frequency and geographic distribution of extreme heat in the U.S. to facilitate public health studies. In our approach, long-term climate change is captured with GCM outputs, and the temporal and spatial characteristics of short-term extremes are represented by the reanalysis data. Two future time horizons for 2040 and 2090 are compared to the recent past period of 1981-2000. We characterize regional-scale temperature and humidity conditions using GCM outputs for two climate change scenarios (A2 and A1B) defined in the Special Report on Emissions Scenarios (SRES). For each future period, 20 years of multi-model GCM outputs are analyzed to develop a 'heat stress climatology' based on statistics of extreme heat indicators. Differences between the two future and the past period are used to define temperature and humidity changes on a monthly time scale and regional spatial scale. These changes are combined with the historical meteorological data, which is hourly and at a 12 km spatial scale, to create future climate realizations. From these realizations, we compute the daily heat stress measures and related spatially-specific climatological fields, such as the mean annual number of days above certain thresholds of maximum and minimum air temperatures, heat indices
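
    The recombination step described above is essentially a change-factor (delta) method: monthly, regional-scale GCM changes are added to the hourly historical fields to build the future realizations. A minimal sketch of that recombination follows, assuming simple additive deltas; the array names, shapes, and values are illustrative only.

      import numpy as np

      # Change-factor ("delta") recombination: hourly historical data plus
      # monthly GCM-derived changes. All names, shapes and values are toy examples.

      def apply_monthly_deltas(hist_hourly, month_of_hour, monthly_delta):
          """Add the month's regional change to every historical hour in that month."""
          future = hist_hourly.copy()
          for m in range(12):
              future[month_of_hour == m] += monthly_delta[m]
          return future

      rng = np.random.default_rng(2)
      n_hours = 20 * 365 * 24                                  # ~20 years of hourly data
      hist_temp = 15.0 + 10.0 * rng.standard_normal(n_hours)   # toy historical temperature (°C)
      month_of_hour = rng.integers(0, 12, n_hours)             # month index of each hour (toy)
      monthly_delta = np.array([2.1, 2.0, 1.8, 1.9, 2.2, 2.6,  # GCM future-minus-past (°C, toy)
                                3.0, 3.1, 2.8, 2.4, 2.2, 2.1])

      future_temp = apply_monthly_deltas(hist_temp, month_of_hour, monthly_delta)
      print("hours above 35 °C, past vs future:",
            int((hist_temp > 35.0).sum()), int((future_temp > 35.0).sum()))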

  4. A comparative analysis of support vector machines and extreme learning machines.

    Science.gov (United States)

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches has become significant and has attracted many research efforts. This paper performs a comparative analysis of the basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, ELMs show great superiority in computational speed, especially for large-scale sample problems. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
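
    For reference, the basic ELM referred to above trains only the output layer: the hidden-layer weights are drawn at random and the output weights come from a single least-squares (pseudo-inverse) solve. A minimal, self-contained sketch on made-up regression data follows; it is not the code used in the paper.

      import numpy as np

      # Minimal extreme learning machine (ELM) for regression: random hidden layer,
      # pseudo-inverse solve for the output weights. Toy data only.

      def elm_fit(X, y, n_hidden, rng):
          W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
          b = rng.standard_normal(n_hidden)                 # random biases
          H = np.tanh(X @ W + b)                            # hidden-layer outputs
          beta = np.linalg.pinv(H) @ y                      # output weights (the only trained part)
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      rng = np.random.default_rng(3)
      X = rng.uniform(-3.0, 3.0, (500, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

      W, b, beta = elm_fit(X, y, n_hidden=50, rng=rng)
      y_hat = elm_predict(X, W, b, beta)
      print(f"training RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.3f}")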

  5. The limits of quantum computers

    International Nuclear Information System (INIS)

    Aaronson, S.

    2008-01-01

    Future computers working with quantum bits would indeed solve some special problems extremely fast, but for most problems they would hardly be superior to contemporary computers. This insight could manifest a new fundamental physical principle.

  6. Full-color large-scaled computer-generated holograms using RGB color filters.

    Science.gov (United States)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  7. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand as limits and caps on usage are imposed. Our trial workflows allow us t...

  8. Analysis of a computational benchmark for a high-temperature reactor using SCALE

    International Nuclear Information System (INIS)

    Goluoglu, S.

    2006-01-01

    Several proposed advanced reactor concepts require methods to address effects of double heterogeneity. In doubly heterogeneous systems, heterogeneous fuel particles in a moderator matrix form the fuel region of the fuel element and thus constitute the first level of heterogeneity. Fuel elements themselves are also heterogeneous with fuel and moderator or reflector regions, forming the second level of heterogeneity. The fuel elements may also form regular or irregular lattices. A five-phase computational benchmark for a high-temperature reactor (HTR) fuelled with uranium or reactor-grade plutonium has been defined by the Organization for Economic Cooperation and Development, Nuclear Energy Agency (OECD NEA), Nuclear Science Committee, Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles. This paper summarizes the analysis results using the latest SCALE code system (to be released in CY 2006 as SCALE 5.1). (authors)

  9. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  10. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  11. Web-based Visual Analytics for Extreme Scale Climate Science

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Evans, Katherine J [ORNL; Harney, John F [ORNL; Jewell, Brian C [ORNL; Shipman, Galen M [ORNL; Smith, Brian E [ORNL; Thornton, Peter E [ORNL; Williams, Dean N. [Lawrence Livermore National Laboratory (LLNL)

    2014-01-01

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  12. Portfolio optimization for heavy-tailed assets: Extreme Risk Index vs. Markowitz

    OpenAIRE

    Mainik, Georg; Mitov, Georgi; Rüschendorf, Ludger

    2015-01-01

    Using daily returns of the S&P 500 stocks from 2001 to 2011, we perform a backtesting study of the portfolio optimization strategy based on the extreme risk index (ERI). This method uses multivariate extreme value theory to minimize the probability of large portfolio losses. With more than 400 stocks to choose from, our study seems to be the first application of extreme value techniques in portfolio management on a large scale. The primary aim of our investigation is the potential of ERI in p...

  13. Large scale statistics for computational verification of grain growth simulations with experiments

    International Nuclear Information System (INIS)

    Demirel, Melik C.; Kuprat, Andrew P.; George, Denise C.; Straub, G.K.; Misra, Amit; Alexander, Kathleen B.; Rollett, Anthony D.

    2002-01-01

    It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface dominated material properties, and finally, to verify experimental results with computer simulations. We have previously shown a strong similarity between small-scale grain growth experiments and anisotropic three-dimensional simulations obtained from the Electron Backscattered Diffraction (EBSD) measurements. Using the same technique, we obtained 5170-grain data from an aluminum film (120 μm thick) with a columnar grain structure. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Characterization of the structures and properties of grain boundary networks (GBN) to produce desirable microstructures is one of the fundamental problems in interface science. There is ongoing research on the development of new experimental and analytical techniques in order to obtain and synthesize information related to GBN. The grain boundary energy and mobility data were characterized by the Electron Backscattered Diffraction (EBSD) technique and Atomic Force Microscopy (AFM) observations (i.e., for ceramic MgO and for the metal Al). Grain boundary energies are extracted from triple junction (TJ) geometry considering the local equilibrium condition at TJ's. Relative boundary mobilities were also extracted from TJ's through a statistical/multiscale analysis. Additionally, there are recent theoretical developments of grain boundary evolution in microstructures. In this paper, a new technique for three-dimensional grain growth simulations was used to simulate interface migration

  14. Comparison and Evolution of Extreme Rainfall-Induced Landslides in Taiwan

    Directory of Open Access Journals (Sweden)

    Chunhung WU

    2017-11-01

    This study analyzed the characteristics of, and locations prone to, extreme rainfall-induced landslides in three watersheds in Taiwan, as well as the long-term evolution of landslides in the Laonong River watershed (LRW), based on multiannual landslide inventories during 2003–2014. Extreme rainfall-induced landslides were centralized beside sinuous or meandering reaches, especially those with large sediment deposition. Landslide-prone strata during extreme rainfall events were sandstone and siltstone. Large-scale landslides were likely to occur when the maximum 6-h accumulated rainfall exceeded 420 mm. All of the large-scale landslides induced by short-duration and high-intensity rainfall developed from historical small-scale landslides beside the sinuous or meandering reaches or in the source area of rivers. However, most of the large-scale landslides induced by long-duration and high-intensity rainfall were new but were still located beside sinuous or meandering reaches or near the source. The frequency density of landslides under long-duration and high-intensity rainfall was an order of magnitude larger than that under short-duration rainfall, and the β values in the landslide frequency density-area analysis ranged from 1.22 to 1.348. The number of downslope landslides was three times larger than those of midslope and upslope landslides. The extreme rainfall-induced landslides occurred in the erosion gullies upstream of the watersheds, whereas those beside rivers were downstream. Analysis of the long-term evolution of landslides in the LRW showed that the geological setting, sinuousness of reaches, and sediment yield volume determined their location and evolution. Small-scale landslides constituted 71.9–96.2% of the total cases from 2003 to 2014, and were more easily induced after Typhoon Morakot (2009). The frequency density of landslides after Morakot was an order of magnitude greater than before, with 61% to 68% of total landslides located in the

  15. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    Science.gov (United States)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the problem-solving abilities of gravity methods in tectonic studies in China. Aiming at the large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added many different kinds of a priori information and constraints to deal with nonuniqueness using models composed of a large number of contiguous cells of unknown property and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not been widely applied to large-scale data yet. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need to find a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation has seen rapid development thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with real data, and

  16. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    Science.gov (United States)

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for their biological activity owing to the inherent interaction amongst their secondary structures or domains. However, due to the limitations of experimental self-interaction detection, one major challenge in the study of SIP prediction is how to exploit computational approaches for SIP detection based on evolutionary information contained in the protein sequence. In this work, we presented a novel computational approach named WELM-LAG, which combined the Weighed-Extreme Learning Machine (WELM) classifier with Local Average Group (LAG) to predict SIPs based on protein sequence. The major improvement of our method lies in presenting an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position-specific scoring matrix (PSSM), and then employing a reliable and robust WELM classifier to carry out classification. In addition, the Principal Component Analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on human and yeast datasets, respectively. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/ .

  17. [Extreme Climatic Events in the Altai Republic According to Dendrochronological Data].

    Science.gov (United States)

    Barinov, V V; Myglan, V S; Nazarov, A N; Vaganov, E A; Agatova, A R; Nepop, R K

    2016-01-01

    The results of dating of extreme climatic events by damage to the anatomical structure and missing tree rings of the Siberian larch in the upper forest boundary of the Altai Republic are given. An analysis of the spatial distribution of the revealed dates over seven plots (Kokcy, Chind, Ak-ha, Jelo, Tute, Tara, and Sukor) allowed us to distinguish the extreme events on interregional (1700, 1783, 1788, 1812, 1814, 1884), regional (1724, 1775, 1784, 1835, 1840, 1847, 1850, 1852, 1854, 1869, 1871, 1910, 1917, 1927, 1938, 1958, 1961), and local (1702, 1736, 1751, 1785, 1842, 1843, 1874, 1885, 1886, 1919, 2007, and 2009) scales. It was shown that the events of an interregional scale correspond with the dates of major volcanic eruptions (Grimsvotn, Lakagigar, Etna, Awu, Tambora, Soufriere St. Vinsent, Mayon, and Krakatau volcanoes) and extreme climatic events, crop failures, lean years, etc., registered in historical sources.

  18. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    NARCIS (Netherlands)

    Wahl, T.; Haigh, I.D.; Nicholls, R.J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.B.A.

    2017-01-01

    One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future

  19. Computational and Experimental Investigations of the Molecular Scale Structure and Dynamics of Gologically Important Fluids and Mineral-Fluid Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, Geoffrey [Alfred Univ., NY (United States)

    2017-04-05

    United States Department of Energy grant DE-FG02-10ER16128, “Computational and Spectroscopic Investigations of the Molecular Scale Structure and Dynamics of Geologically Important Fluids and Mineral-Fluid Interfaces” (Geoffrey M. Bowers, P.I.) focused on developing a molecular-scale understanding of processes that occur in fluids and at solid-fluid interfaces using the combination of spectroscopic, microscopic, and diffraction studies with molecular dynamics computer modeling. The work is intimately tied to the twin proposal at Michigan State University (DOE DE-FG02-08ER15929; same title: R. James Kirkpatrick, P.I. and A. Ozgur Yazaydin, co-P.I.).

  20. Really Large Scale Computer Graphic Projection Using Lasers and Laser Substitutes

    Science.gov (United States)

    Rother, Paul

    1989-07-01

    This paper reflects on past laser projects to display vector scanned computer graphic images onto very large and irregular surfaces. Since the availability of microprocessors and high powered visible lasers, very large scale computer graphics projection has become a reality. Due to the independence from a focusing lens, lasers easily project onto distant and irregular surfaces and have been used for amusement parks, theatrical performances, concert performances, industrial trade shows and dance clubs. Lasers have been used to project onto mountains, buildings, 360° globes, clouds of smoke and water. These methods have proven successful in installations at Epcot Theme Park in Florida; Stone Mountain Park in Georgia; the 1984 Olympics in Los Angeles; hundreds of corporate trade shows; and thousands of musical performances. Using new ColorRay™ technology, the use of costly and fragile lasers is no longer necessary. Utilizing fiber optic technology, the functionality of lasers can be duplicated for new and exciting projection possibilities. ColorRay™ technology has enjoyed worldwide recognition in conjunction with Pink Floyd and George Michael's worldwide tours.

  1. Are extreme hydro-meteorological events a prerequisite for extreme water quality impacts? Exploring climate impacts on inland and coastal waters

    Science.gov (United States)

    Michalak, A. M.; Balaji, V.; Del Giudice, D.; Sinha, E.; Zhou, Y.; Ho, J. C.

    2017-12-01

    Questions surrounding water sustainability, climate change, and extreme events are often framed around water quantity - whether too much or too little. The massive impacts of extreme water quality impairments are equally compelling, however. Recent years have provided a host of compelling examples, with unprecedented harmful algal blooms developing along the West coast, in Utah Lake, in Lake Erie, and off the Florida coast, and huge hypoxic dead zones continuing to form in regions such as Lake Erie, the Chesapeake Bay, and the Gulf of Mexico. Linkages between climate change, extreme events, and water quality impacts are not well understood, however. Several factors explain this lack of understanding, including the relative complexity of underlying processes, the spatial and temporal scale mismatch between hydrologists and climatologists, and observational uncertainty leading to ambiguities in the historical record. Here, we draw on a number of recent studies that aim to quantitatively link meteorological variability and water quality impacts to test the hypothesis that extreme water quality impairments are the result of extreme hydro-meteorological events. We find that extreme hydro-meteorological events are neither always a necessary nor a sufficient condition for the occurrence of extreme water quality impacts. Rather, extreme water quality impairments often occur in situations where multiple contributing factors compound, which complicates both attribution of historical events and the ability to predict the future incidence of such events. Given the critical societal importance of water quality projections, a concerted program of uncertainty reduction encompassing observational and modeling components will be needed to examine situations where extreme weather plays an important, but not solitary, role in the chain of cause and effect.

  2. Methodology for featuring and assessing extreme climatic events

    International Nuclear Information System (INIS)

    Malleron, N.; Bernardara, P.; Benoit, M.; Parey, S.; Perret, C.

    2013-01-01

    The setting up of a nuclear power plant on a particular site requires the assessment of risks linked to extreme natural events like flooding or earthquakes. As a consequence of the Fukushima accident, EDF proposes to take into account even rarer events in order to improve the robustness of the facility over its entire operating life. This article presents the methodology used by EDF to analyse a set of data in a statistical way in order to extract extreme values. This analysis is based on the theory of extreme values and is applied to the extreme values of the flow rate in the case of a river overflowing. The methodology consists of six steps: 1) selection of the event, of its characteristic parameter and of its probability (for instance, what is the flood flow rate that has a probability of 10⁻³ of occurring); 2) collection of data over a long period of time (or recovery of data from past periods); 3) extraction of extreme values from the data; 4) identification of an adequate statistical law that fits the spread of the extreme values; 5) validation of the selected statistical law through visual or statistical tests; and 6) computation of the flow rate of the event itself. (A.C.)
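
    Steps 3) to 6) above amount to fitting an extreme value distribution to the extracted maxima and inverting it at the target exceedance probability (here 10⁻³, the "1000-year" flow). A minimal sketch with synthetic annual-maximum discharges follows; the data, the GEV choice, and the fitted values are illustrative only, not EDF's actual procedure.

      import numpy as np
      from scipy.stats import genextreme

      # Fit a GEV to annual-maximum discharges and read off the 10^-3 return level.
      # The synthetic 80-year record below is illustrative only.

      rng = np.random.default_rng(4)
      annual_max_flow = genextreme.rvs(-0.1, loc=800.0, scale=250.0, size=80,
                                       random_state=rng)        # m^3/s (toy values)

      c, loc, scale = genextreme.fit(annual_max_flow)            # maximum-likelihood fit
      q_1000yr = genextreme.isf(1e-3, c, loc=loc, scale=scale)   # exceeded with prob. 1e-3 per year

      print(f"fitted GEV: shape={c:.3f}, loc={loc:.1f}, scale={scale:.1f}")
      print(f"estimated 1000-year flow: {q_1000yr:.0f} m^3/s")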

  3. Micro-computed tomography pore-scale study of flow in porous media: Effect of voxel resolution

    Science.gov (United States)

    Shah, S. M.; Gray, F.; Crawshaw, J. P.; Boek, E. S.

    2016-09-01

    A fundamental understanding of flow in porous media at the pore-scale is necessary to be able to upscale average displacement processes from core to reservoir scale. The study of fluid flow in porous media at the pore-scale consists of two key procedures: Imaging - reconstruction of three-dimensional (3D) pore space images; and modelling such as with single and two-phase flow simulations with Lattice-Boltzmann (LB) or Pore-Network (PN) Modelling. Here we analyse pore-scale results to predict petrophysical properties such as porosity, single-phase permeability and multi-phase properties at different length scales. The fundamental issue is to understand the image resolution dependency of transport properties, in order to up-scale the flow physics from pore to core scale. In this work, we use a high resolution micro-computed tomography (micro-CT) scanner to image and reconstruct three dimensional pore-scale images of five sandstones (Bentheimer, Berea, Clashach, Doddington and Stainton) and five complex carbonates (Ketton, Estaillades, Middle Eastern sample 3, Middle Eastern sample 5 and Indiana Limestone 1) at four different voxel resolutions (4.4 μm, 6.2 μm, 8.3 μm and 10.2 μm), scanning the same physical field of view. Implementing three phase segmentation (macro-pore phase, intermediate phase and grain phase) on pore-scale images helps to understand the importance of connected macro-porosity in the fluid flow for the samples studied. We then compute the petrophysical properties for all the samples using PN and LB simulations in order to study the influence of voxel resolution on petrophysical properties. We then introduce a numerical coarsening scheme which is used to coarsen a high voxel resolution image (4.4 μm) to lower resolutions (6.2 μm, 8.3 μm and 10.2 μm) and study the impact of coarsening data on macroscopic and multi-phase properties. Numerical coarsening of high resolution data is found to be superior to using a lower resolution scan because it
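
    The numerical coarsening mentioned above can be pictured as block-averaging a high-resolution segmented volume onto a coarser voxel grid and re-thresholding the result. The sketch below illustrates that idea on a synthetic binary pore-space image; the coarsening factor, threshold, and random "pore space" are assumptions for illustration, not the authors' scheme.

      import numpy as np

      # Block-average a high-resolution binary pore-space image to a coarser grid,
      # then re-threshold. Factor, threshold and the random volume are illustrative.

      def coarsen(binary_volume, factor, threshold=0.5):
          nz, ny, nx = (s // factor for s in binary_volume.shape)
          v = binary_volume[:nz * factor, :ny * factor, :nx * factor].astype(float)
          blocks = v.reshape(nz, factor, ny, factor, nx, factor)
          local_porosity = blocks.mean(axis=(1, 3, 5))   # pore fraction in each coarse voxel
          return local_porosity >= threshold             # coarse binary image

      rng = np.random.default_rng(5)
      fine = rng.random((128, 128, 128)) < 0.25          # toy pore space, porosity ~0.25

      coarse = coarsen(fine, factor=2)
      print(f"fine porosity:   {fine.mean():.3f} at shape {fine.shape}")
      print(f"coarse porosity: {coarse.mean():.3f} at shape {coarse.shape}")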

  4. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  5. Validity of two methods to assess computer use: Self-report by questionnaire and computer use software

    NARCIS (Netherlands)

    Douwes, M.; Kraker, H.de; Blatter, B.M.

    2007-01-01

    A long duration of computer use is known to be positively associated with Work Related Upper Extremity Disorders (WRUED). Self-report by questionnaire is commonly used to assess a worker's duration of computer use. The aim of the present study was to assess the validity of self-report and computer

  6. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on a fitness function defined from the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.

  7. Using damage data to estimate the risk from summer convective precipitation extremes

    Science.gov (United States)

    Schroeer, Katharina; Tye, Mari

    2017-04-01

    This study explores the potential added value from including loss and damage data to understand the risks from high-intensity short-duration convective precipitation events. Projected increases in these events are expected even in regions that are likely to become more arid. Such high intensity precipitation events can trigger hazardous flash floods, debris flows, and landslides that put people and local assets at risk. However, the assessment of local scale precipitation extremes is hampered by its high spatial and temporal variability. In addition to this, not only are extreme events rare, but such small-scale events are likely to be underreported where they do not coincide with the observation network. Reports of private loss and damage on a local administrative unit scale (LAU 2 level) are used to explore the relationship between observed rainfall events and damages reportedly related to hydro-meteorological processes. With 480 Austrian municipalities located within our south-eastern Alpine study region, the damage data are available on a much smaller scale than the available rainfall data. Precipitation is recorded daily at 185 gauges and 52% of these stations additionally deliver sub-hourly rainfall information. To obtain physically plausible information, damage and rainfall data are grouped and analyzed on a catchment scale. The data indicate that rainfall intensities are higher on days that coincide with a damage claim than on days for which no damage was reported. However, approximately one third of the damages related to hydro-meteorological hazards were claimed on days for which no rainfall was recorded at any gauge in the respective catchment. Our goal is to assess whether these events indicate potential extreme events missing in the observations. Damage always is a consequence of an asset being exposed and susceptible to a hazardous process, and naturally, many factors influence whether an extreme rainfall event causes damage. We set up a statistical

  8. Scalable ParaView for Extreme Scale Visualization, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Petascale computing is leading to significant breakthroughs in a number of fields and is revolutionizing the way science is conducted. Data is not knowledge, however,...

  9. Nonlinear wave-mixing processes in the extreme ultraviolet

    International Nuclear Information System (INIS)

    Misoguti, L.; Christov, I. P.; Backus, S.; Murnane, M. M.; Kapteyn, H. C.

    2005-01-01

    We present data from two-color high-order harmonic generation in a hollow waveguide that suggest the presence of a nonlinear-optical frequency conversion process driven by extreme ultraviolet light. By combining the fundamental and second harmonic of an 800 nm laser in a hollow-core fiber, with varying relative polarizations, and by observing the pressure and power scaling of the various harmonic orders, we show that the data are consistent with a picture where we drive the process of high-harmonic generation, which in turn drives four-wave frequency mixing processes in the EUV. This work promises a method for extending nonlinear optics into the extreme ultraviolet region of the spectrum using an approach that has not previously been considered, and has compelling implications for generating tunable light at short wavelengths.

  10. Conservative treatment of soft tissue sarcomas of the extremities. Functional evaluation with LENT-SOMA scales and the Enneking score

    International Nuclear Information System (INIS)

    Tawfiq, N.; Lagarde, P.; Thomas, L.; Kantor, G.; Stockle, E.; Bui, B.N.

    2000-01-01

    Objective. - The aim of this prospective study is to assess the feasibility of late-effects assessment by LENT-SOMA scales after conservative treatment of soft tissue sarcomas of the extremities, and to compare it with the functional evaluation by the Enneking score. Patients and methods. - During the systematic follow-up consultations, a series of 32 consecutive patients was evaluated in terms of late effects by LENT-SOMA scales and functional results by the Enneking score. The median time after treatment was 65 months. The treatment consisted of conservative surgery (all cases) followed by radiation therapy (29 cases), often combined with adjuvant therapy (12 concomitant radio-chemotherapy cases out of 14). The assessment of the toxicity was retrospective for acute effects and prospective for the following late tissue damage: skin/subcutaneous tissues, muscles/soft tissues and peripheral nerves. Results. - According to the Enneking score, the global score for the overall series was high (24/30), despite four scores of zero for psychological acceptance. According to LENT-SOMA scales, a low rate of severe sequelae (grade 3-4) was observed. The occurrence of high-grade sequelae and their functional consequences were not correlated with quality of exeresis, dose of radiotherapy or use of concomitant chemotherapy. A complementarity was observed between certain factors of the Enneking score and some criteria of the LENT-SOMA scales, especially those for muscles/soft tissues. Conclusion. - The good quality of functional results was confirmed by the two main scoring systems for late normal tissue damage. The routine use of LENT-SOMA seems to be more time consuming than the Enneking score (mean scoring time: 13 versus five minutes). The LENT-SOMA scales are aimed at a detailed description of late toxicity and sequelae while the Enneking score provides a more global evaluation, including the psychological acceptance of treatment. The late effects assessment by the LENT

  11. Physical and mechanical metallurgy of zirconium alloys for nuclear applications: a multi-scale computational study

    Energy Technology Data Exchange (ETDEWEB)

    Glazoff, Michael Vasily [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-10-01

    In the post-Fukushima world, the stability of materials under extreme conditions is an important issue for the safety of nuclear reactors. Because the nuclear industry is going to continue using advanced zirconium cladding materials in the foreseeable future, it becomes critical to gain fundamental understanding of several interconnected problems. First, what are the thermodynamic and kinetic factors affecting the oxidation and hydrogen pick-up by these materials at normal and off-normal conditions, and in long-term storage? Secondly, what protective coatings (if any) could be used in order to gain extremely valuable time at off-normal conditions, e.g., when temperature exceeds the critical value of 2200°F? Thirdly, the kinetics of oxidation of such protective coating or braiding needs to be quantified. Lastly, even if some degree of success is achieved along this path, it is absolutely critical to have automated inspection algorithms that allow identifying cladding defects as soon as possible. This work strives to explore these interconnected factors from the most advanced computational perspective, utilizing such modern techniques as first-principles atomistic simulations, computational thermodynamics of materials, diffusion modeling, and the morphological algorithms of image processing for defect identification. Consequently, it consists of four parts dealing with these four problem areas, preceded by an introduction and formulation of the studied problems. In the 1st part an effort was made to employ computational thermodynamics and ab initio calculations to shed light upon the different stages of oxidation of Zircaloys (2 and 4), the role of microstructure optimization in increasing their thermal stability, and the process of hydrogen pick-up, both in normal working conditions and in long-term storage. The 2nd part deals with the need to understand the influence and respective roles of the two different plasticity mechanisms in Zr nuclear alloys: twinning

  12. The NASA Ames PAH IR Spectroscopic Database: Computational Version 3.00 with Updated Content and the Introduction of Multiple Scaling Factors

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.

    2018-02-01

    Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.

  13. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  14. Computer tomography for rare soft tissue tumours of the extremities

    International Nuclear Information System (INIS)

    Boettger, E.; Semerak, M.; Stoltze, D.; Rossak, K.

    1979-01-01

    Five patients with undiagnosed soft tissue masses in the extremities were examined and in two a pathological diagnosis could be made. One was an extensive, invasive fibroma (desmoid) 22 cm long which could be followed from the thigh almost into the pelvis. It was sharply demarcated from the surrounding muscles and of higher density. The second case was a 12 cm long cavernous haemangioma in the semi-membranosus muscle. This was originally hypo-dense, but showed marked increase in its density after the administration of contrast. (orig.)

  15. Distributed Extreme Learning Machine for Nonlinear Learning over Network

    Directory of Open Access Journals (Sweden)

    Songyan Huang

    2015-02-01

    Distributed data collection and analysis over a network are ubiquitous, especially over a wireless sensor network (WSN). To our knowledge, the data model used in most of the distributed algorithms is linear. However, in real applications, the linearity of systems is not always guaranteed. In nonlinear cases, the single hidden layer feedforward neural network (SLFN) with radial basis function (RBF) hidden neurons has the ability to approximate any continuous function and, thus, may be used as the nonlinear learning system. However, confined by the communication cost, using the distributed version of the conventional algorithms to train the neural network directly is usually prohibited. Fortunately, based on the theorems provided in the extreme learning machine (ELM) literature, we only need to compute the output weights of the SLFN. Computing the output weights itself is a linear learning problem, although the input-output mapping of the overall SLFN is still nonlinear. Using the distributed algorithm to cooperatively compute the output weights of the SLFN, we obtain a distributed extreme learning machine (dELM) for nonlinear learning in this paper. This dELM is applied to the regression problem and classification problem to demonstrate its effectiveness and advantages.

  16. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    OpenAIRE

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2017-01-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. Forty of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation bet...
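
    For context, moment magnitude is conventionally defined from the seismic moment M0 (in N·m) by the Hanks-Kanamori relation MW = (2/3)(log10 M0 − 9.1); the ML-to-MW scaling relation derived in the study itself is not reproduced here. A minimal sketch of the MW computation with illustrative moments:

      import math

      # Standard Hanks-Kanamori moment magnitude from seismic moment M0 in N*m.
      # The ML-MW scaling relation derived in the study is not reproduced here.

      def moment_magnitude(m0_newton_metre):
          return (2.0 / 3.0) * (math.log10(m0_newton_metre) - 9.1)

      for m0 in (1.1e13, 3.5e14, 1.1e16):   # illustrative seismic moments, N*m
          print(f"M0 = {m0:.1e} N*m  ->  MW = {moment_magnitude(m0):.2f}")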

  17. Resolving the three-dimensional microstructure of polymer electrolyte fuel cell electrodes using nanometer-scale X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Epting, William K.; Gelb, Jeff; Litster, Shawn

    2012-02-08

    The electrodes of a polymer electrolyte fuel cell (PEFC) are composite porous layers consisting of carbon and platinum nanoparticles and a polymer electrolyte binder. The proper composition and arrangement of these materials for fast reactant transport and high electrochemical activity is crucial to achieving high performance, long lifetimes, and low costs. Here, the microstructure of a PEFC electrode using nanometer-scale X-ray computed tomography (nano-CT) with a resolution of 50 nm is investigated. The nano-CT instrument obtains this resolution for the low-atomic-number catalyst support and binder using a combination of a Fresnel zone plate objective and Zernike phase contrast imaging. High-resolution, non-destructive imaging of the three-dimensional (3D) microstructures provides important new information on the size and form of the catalyst particle agglomerates and pore spaces. Transmission electron microscopy (TEM) and mercury intrusion porosimetry (MIP) are applied to evaluate the limits of the resolution and to verify the 3D reconstructions. The computational reconstructions and size distributions obtained with nano-CT can be used for evaluating electrode preparation, performing pore-scale simulations, and extracting effective morphological parameters for large-scale computational models. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  18. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be
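
    The produce/consume pattern described in this record, tasks pushed into a durable queue and pulled by worker nodes, can be sketched in a minimal single-process form using only the Python standard library; this stands in for the actual HySDS/SciFlow machinery, whose APIs are not reproduced here, and the "granule" processing is a placeholder.

        import queue
        import threading

        # Stand-in for the durable job queue; a real system would use a persistent broker.
        jobs = queue.Queue()

        def process_granule(granule_id):
            # Placeholder for pulling a subset via OPeNDAP and computing statistics.
            return f"granule {granule_id}: processed"

        def worker(results):
            # Each worker repeatedly pulls the next task, mimicking worker VMs in the cloud.
            while True:
                granule_id = jobs.get()
                if granule_id is None:          # sentinel: no more work
                    jobs.task_done()
                    break
                results.append(process_granule(granule_id))
                jobs.task_done()

        # Partition a hypothetical 10-year dataset by time and enqueue one task per chunk.
        for granule_id in range(120):           # e.g., one task per month
            jobs.put(granule_id)

        results = []
        threads = [threading.Thread(target=worker, args=(results,)) for _ in range(8)]
        for t in threads:
            t.start()
        for _ in threads:
            jobs.put(None)                      # one sentinel per worker
        for t in threads:
            t.join()

        print(len(results), "tasks completed")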

  19. Understanding extreme sea levels for coastal impact and adaptation analysis

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Hinkel, J.; Dangendorf, S.; Slangen, A.

    2016-12-01

    Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels, because increasing damage due to extreme events, such as storm surges and tropical cyclones, is one of the major consequences of sea level rise and climate change. In fact, the IPCC has highlighted in its AR4 report that "societal impacts of sea level change primarily occur via the extreme levels rather than as a direct consequence of mean sea level changes". Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future mean sea level; different scenarios were developed with process-based or semi-empirical models and used for coastal impact assessments at various spatial scales to guide coastal management and adaptation efforts. The uncertainties in future sea level rise are typically accounted for by analyzing the impacts associated with a range of scenarios leading to a vertical displacement of the distribution of extreme sea-levels. And indeed most regional and global studies find little or no evidence for changes in storminess with climate change, although there is still low confidence in the results. However, and much more importantly, there is still a limited understanding of present-day extreme sea-levels which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of extreme sea-levels. The bias of these models varies spatially and can reach values much larger than the expected sea level rise; but it can be accounted for in most regions making use of in-situ measurements; (2) Statistical models used for determining present-day extreme sea-level exceedance probabilities. There is no universally accepted approach to obtain such values for flood risk assessments and while substantial research has explored inter-model uncertainties for mean sea level, we explore here, for the first time, inter

  20. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, L.M.; Jordon, W.C. [Oak Ridge National Lab., TN (United States)]; Edwards, A.L. [Oak Ridge National Lab., TN (United States); Lawrence Livermore National Lab., CA (United States)] [and others]

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  1. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    International Nuclear Information System (INIS)

    Petrie, L.M.; Jordon, W.C.; Edwards, A.L.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries

  2. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
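
    As a rough illustration of the derivative-based idea behind DUA (not the GRESS/ADGEN implementation, which obtains derivatives by computer-calculus compilation of the model source), the sketch below propagates parameter standard deviations through a toy model using first-order sensitivities; the model, parameter values, and uncertainties are invented, and DUA itself propagates full probability distributions rather than only a variance.

        import numpy as np

        def model(p):
            # Hypothetical smooth model y = f(p); stands in for a flow calculation.
            k, h, mu = p
            return k * h / mu

        p0 = np.array([2.0e-5, 10.0, 1.0e-3])      # nominal parameter values (made up)
        sigma = np.array([0.2e-5, 1.0, 0.1e-3])    # parameter standard deviations (made up)

        # Sensitivities dy/dp by central finite differences (GRESS/ADGEN would supply
        # these derivatives analytically from the model source code).
        grad = np.zeros_like(p0)
        for i in range(len(p0)):
            dp = np.zeros_like(p0)
            dp[i] = 1e-6 * abs(p0[i])
            grad[i] = (model(p0 + dp) - model(p0 - dp)) / (2 * dp[i])

        # First-order (linear) propagation of independent parameter uncertainties.
        var_y = np.sum((grad * sigma) ** 2)
        print("nominal y =", model(p0), "  std(y) approx.", np.sqrt(var_y))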

  3. A method of validating climate models in climate research with a view to extreme events; Eine Methode zur Validierung von Klimamodellen fuer die Klimawirkungsforschung hinsichtlich der Wiedergabe extremer Ereignisse

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, U

    2000-08-01

    A method is presented to validate climate models with respect to extreme events which are suitable for risk assessment in impact modeling. The algorithm is intended to complement conventional techniques. These procedures mainly compare simulation results with reference data based on single or only a few climatic variables at the same time, focusing on how well a model reproduces the known physical processes of the atmosphere. Such investigations are often based on seasonal or annual mean values. For impact research, however, extreme climatic conditions with shorter typical time scales are generally more interesting. Furthermore, such extreme events are frequently characterized by combinations of individual extremes which require a multivariate approach. The validation method presented here basically consists of a combination of several well-known statistical techniques, completed by a newly developed diagnosis module to quantify model deficiencies. First of all, critical threshold values of key climatic variables for impact research have to be derived, serving as criteria to define extreme conditions for a specific activity. Unlike in other techniques, the simulation results to be validated are interpolated to the reference data sampling points in the initial step of this new technique. Besides the fact that the same spatial representation is provided in this way in both data sets for the next diagnostic steps, this procedure also makes it possible to leave the reference basis unchanged for any type of model output and to perform the validation on a real orography. To simultaneously identify the spatial characteristics of a given situation regarding all considered extreme value criteria, a multivariate cluster analysis method for pattern recognition is separately applied to both simulation results and reference data. Afterwards, various distribution-free statistical tests are applied depending on the specific situation to detect statistically significant
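
    A schematic Python sketch of the statistical core of such a validation, clustering both data sets on multiple extreme-value indicators and then comparing the resulting distributions with a distribution-free test, is given below; the indicators, the k-means/Kolmogorov-Smirnov choices, and all numbers are illustrative assumptions rather than the exact procedure of the report.

        import numpy as np
        from scipy.stats import ks_2samp
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)

        # Hypothetical multivariate extreme-day indicators (e.g., threshold exceedance,
        # dry-spell length) at the same reference sampling points for observations and a model.
        obs = rng.normal(loc=[1.0, 0.5], scale=[0.30, 0.20], size=(500, 2))
        sim = rng.normal(loc=[1.1, 0.4], scale=[0.35, 0.20], size=(500, 2))

        # Multivariate cluster analysis applied separately to both data sets.
        km_obs = KMeans(n_clusters=3, n_init=10, random_state=0).fit(obs)
        km_sim = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sim)

        # Distribution-free comparison of each indicator between the two data sets.
        for j, name in enumerate(["indicator_1", "indicator_2"]):
            stat, p = ks_2samp(obs[:, j], sim[:, j])
            print(f"{name}: KS statistic = {stat:.3f}, p-value = {p:.3f}")

        print("observed cluster centers:\n", km_obs.cluster_centers_)
        print("simulated cluster centers:\n", km_sim.cluster_centers_)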

  4. A method of validating climate models in climate research with a view to extreme events; Eine Methode zur Validierung von Klimamodellen fuer die Klimawirkungsforschung hinsichtlich der Wiedergabe extremer Ereignisse

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, U.

    2000-08-01

    A method is presented to validate climate models with respect to extreme events which are suitable for risk assessment in impact modeling. The algorithm is intended to complement conventional techniques. These procedures mainly compare simulation results with reference data based on single or only a few climatic variables at the same time, focusing on how well a model reproduces the known physical processes of the atmosphere. Such investigations are often based on seasonal or annual mean values. For impact research, however, extreme climatic conditions with shorter typical time scales are generally more interesting. Furthermore, such extreme events are frequently characterized by combinations of individual extremes which require a multivariate approach. The validation method presented here basically consists of a combination of several well-known statistical techniques, completed by a newly developed diagnosis module to quantify model deficiencies. First of all, critical threshold values of key climatic variables for impact research have to be derived, serving as criteria to define extreme conditions for a specific activity. Unlike in other techniques, the simulation results to be validated are interpolated to the reference data sampling points in the initial step of this new technique. Besides the fact that the same spatial representation is provided in this way in both data sets for the next diagnostic steps, this procedure also makes it possible to leave the reference basis unchanged for any type of model output and to perform the validation on a real orography. To simultaneously identify the spatial characteristics of a given situation regarding all considered extreme value criteria, a multivariate cluster analysis method for pattern recognition is separately applied to both simulation results and reference data. Afterwards, various distribution-free statistical tests are applied depending on the specific situation to detect statistically significant

  5. Cherenkov radiation effects on counting efficiency in extremely quenched liquid scintillation samples

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.; Rodriguez Barquero, L.

    1993-01-01

    The CIEMAT/NIST tracer method has successfully standardized nuclides with diverse quench values and decay schemes in liquid scintillation counting. However, the counting efficiency is computed inaccurately for extremely quenched samples. This article shows that when samples are extremely quenched, the counting efficiency in high-energy beta-ray nuclides depends principally on the Cherenkov effect. A new technique is described for quench determination, which makes the measurement of counting efficiency possible when scintillation counting approaches zero. A new efficiency computation model for pure beta-ray nuclides is also described. The results of the model are tested experimentally for 89Sr, 90Y, 36Cl and 204Tl nuclides, independently of the quench level. (orig.)

  6. Estimating extreme river discharges in Europe through a Bayesian network

    Science.gov (United States)

    Paprotny, Dominik; Morales-Nápoles, Oswaldo

    2017-06-01

    Large-scale hydrological modelling of flood hazards requires adequate extreme discharge data. In practice, models based on physics are applied alongside those utilizing only statistical analysis. The former require enormous computational power, while the latter are mostly limited in accuracy and spatial coverage. In this paper we introduce an alternate, statistical approach based on Bayesian networks (BNs), a graphical model for dependent random variables. We use a non-parametric BN to describe the joint distribution of extreme discharges in European rivers and variables representing the geographical characteristics of their catchments. Annual maxima of daily discharges from more than 1800 river gauges (stations with catchment areas ranging from 1.4 to 807 000 km2) were collected, together with information on terrain, land use and local climate. The (conditional) correlations between the variables are modelled through copulas, with the dependency structure defined in the network. The results show that using this method, mean annual maxima and return periods of discharges could be estimated with an accuracy similar to existing studies using physical models for Europe and better than a comparable global statistical model. Performance of the model varies slightly between regions of Europe, but is consistent between different time periods, and remains the same in a split-sample validation. Though discharge prediction under climate change is not the main scope of this paper, the BN was applied to a large domain covering all sizes of rivers in the continent both for present and future climate, as an example. Results show substantial variation in the influence of climate change on river discharges. The model can be used to provide quick estimates of extreme discharges at any location for the purpose of obtaining input information for hydraulic modelling.

  7. An assessment of future computer system needs for large-scale computation

    Science.gov (United States)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  8. Is Extremely High Life Satisfaction during Adolescence Advantageous?

    Science.gov (United States)

    Suldo, Shannon M.; Huebner, E. Scott

    2006-01-01

    This study examined whether extremely high life satisfaction was associated with adaptive functioning or maladaptive functioning. Six hundred ninety-eight secondary level students completed the Students' Life Satisfaction Scale [Huebner, 1991a, School Psychology International, 12, pp. 231-240], Youth Self-Report of the Child Behavior Checklist…

  9. Extreme Winds from the NCEP/NCAR Reanalysis Data

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, Jakob

    2009-01-01

    wind. We examined extreme winds in different places where the strongest wind events are weather phenomena of different scales, including the mid-latitude lows in Denmark, channelling winds in the Gulf of Suez, typhoons in the western North Pacific, cyclones in the Caribbean Sea, local strong winds...

  10. Elastic Spatial Query Processing in OpenStack Cloud Computing Environment for Time-Constraint Data Analysis

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2017-03-01

    Full Text Available Geospatial big data analysis (GBDA) is extremely significant for time-constraint applications such as disaster response. However, the time-constraint analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and the nearest neighbor query algorithms are not scalable without using MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are trapped in screw-processing, which is a known issue in Geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed Core Operation System (CoreOS) clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to Docker containers, where Spark master instances reside. Finally, the horizontal pod auto-scaler (HPA) would scale out and scale in Docker containers for supporting on-demand computing resources. Combined with an auto-scaling group of virtual instances, HPA helps to find each of the five nearest neighbors for 46,139,532 query objects from 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constraint GBDA in clouds.
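
    As a minimal illustration of a Spark-based parallel spatial query (not the paper's PSQPA implementation, and without the Kubernetes/HPA layer), the sketch below runs a bounding-box range query over randomly generated points with PySpark; the data set, partition count, and query window are assumptions.

        import random
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("range-query-sketch").getOrCreate()
        sc = spark.sparkContext

        # Hypothetical spatial data set: (id, x, y) points scattered over a unit square.
        random.seed(0)
        points = [(i, random.random(), random.random()) for i in range(100_000)]
        rdd = sc.parallelize(points, numSlices=64)

        # Spatial range (bounding-box) query, evaluated in parallel across partitions.
        xmin, ymin, xmax, ymax = 0.25, 0.25, 0.30, 0.30
        hits = rdd.filter(lambda p: xmin <= p[1] <= xmax and ymin <= p[2] <= ymax)

        print("points inside query window:", hits.count())
        spark.stop()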

  11. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square-root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique for each direction and wavelength. Finally, the set of power patterns is summed to produce the full waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
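
    The core bookkeeping of the method, shifting a reference complex field for each hole, applying a per-hole phase factor, summing coherently and then taking the squared magnitude, can be sketched in a few lines of NumPy; the aperture geometry, reference pattern, and phase values below are illustrative stand-ins, not the authors' code.

        import numpy as np

        N = 256                                   # focal-plane array size (hypothetical)
        rng = np.random.default_rng(0)

        # Reference complex field pattern from a single central hole, computed on a 2N x 2N
        # grid so that shifted copies for off-center holes stay inside the array.
        yy, xx = np.mgrid[-N:N, -N:N]
        reference = np.exp(-(xx**2 + yy**2) / (2 * 40.0**2)) * np.exp(1j * 0.05 * (xx + yy))

        # Hypothetical hole offsets (pixels) for one LOS direction and wavelength.
        holes = rng.integers(-N // 2, N // 2, size=(50, 2))
        phases = 2 * np.pi * rng.random(len(holes))     # per-hole electrical phase shifts

        field = np.zeros((N, N), dtype=complex)
        for (dx, dy), phi in zip(holes, phases):
            # Translational shift of the reference pattern plus a phase factor.
            shifted = reference[N // 2 - dy : N // 2 - dy + N, N // 2 - dx : N // 2 - dx + N]
            field += shifted * np.exp(1j * phi)

        power = np.abs(field) ** 2                # coherent sum, then squared magnitude
        print("peak power on the focal plane:", power.max())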

  12. Body composition of the human lower extremity observed by computed tomography

    International Nuclear Information System (INIS)

    Suzuki, Masataka; Hasegawa, Makiko; Wu, Chung-Lei; Mimaru, Osamu

    1987-01-01

    Using computed tomography (CT) images, the body composition of the lower extremity was examined in 24 adults (10 male, 14 female). CT images were taken at a proximal section (upper third of the thigh), a distal section (lower third of the thigh) and a leg section (upper third of the lower leg), and the quantities determined from the images included the areas of the total cross-section, muscle, subcutaneous fat, connective tissue and bone in each cross-section. The ratio of each component to the total area was surveyed, and age-related changes and differences between three body types, defined by Rohrer's index, were discussed for both sexes. The following results were obtained. 1. In men, the ratio of each component to the total sectional area at the three section levels was generally highest for muscle, followed by subcutaneous fat, connective tissue and bone. In women, subcutaneous fat exceeded muscle in the proximal section for the A and C body types, whereas muscle exceeded subcutaneous fat for the D body type in this section and for all body types in the distal and leg sections. 2. Regarding the correlation between the component ratios and Rohrer's index or age in the femoral sections of men, the ratios of subcutaneous fat and connective tissue were positively related, and the ratio of muscle was negatively related. 3. In men, age-related decreases in muscle area were found below age 50 in the extensors, at about age 50 in the adductors and at about age 60 in the flexors for the proximal section, and at about age 50 in the extensors, after age 55 in the adductors and at about age 60 in the flexors for the distal section. In the leg section, the decreasing tendency with age was predominant in the flexors in men and was also found after age 50 in women. (author)

  13. Brownian gas models for extreme-value laws

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2013-01-01

    In this paper we establish one-dimensional Brownian gas models for the extreme-value laws of Gumbel, Weibull, and Fréchet. A gas model is a countable collection of independent particles governed by common diffusion dynamics. The extreme-value laws are the universal probability distributions governing the affine scaling limits of the maxima and minima of ensembles of independent and identically distributed one-dimensional random variables. Using the recently introduced concept of stationary Poissonian intensities, we construct two gas models whose global statistical structures are stationary, and yield the extreme-value laws: a linear Brownian motion gas model for the Gumbel law, and a geometric Brownian motion gas model for the Weibull and Fréchet laws. The stochastic dynamics of these gas models are studied in detail, and closed-form analytical descriptions of their temporal correlation structures, their topological phase transitions, and their intrinsic first-passage-time fluxes are presented. (paper)
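
    For reference, the three classical extreme-value laws named above can be written in standardized form (up to the affine rescaling mentioned in the abstract; this is a textbook statement rather than material taken from the paper) as

        \Lambda(x) = \exp(-e^{-x}), \quad x \in \mathbb{R} \qquad \text{(Gumbel)}
        \Phi_\alpha(x) = \exp(-x^{-\alpha}), \quad x > 0, \; \alpha > 0 \qquad \text{(Fréchet)}
        \Psi_\alpha(x) = \exp(-(-x)^{\alpha}), \quad x \le 0, \; \alpha > 0 \qquad \text{(Weibull)}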

  14. Communicating Climate Uncertainties: Challenges and Opportunities Related to Spatial Scales, Extreme Events, and the Warming 'Hiatus'

    Science.gov (United States)

    Casola, J. H.; Huber, D.

    2013-12-01

    Many media, academic, government, and advocacy organizations have achieved sophistication in developing effective messages based on scientific information, and can quickly translate salient aspects of emerging climate research and evolving observations. However, there are several ways in which valid messages can be misconstrued by decision makers, leading them to inaccurate conclusions about the risks associated with climate impacts. Three cases will be discussed: 1) Issues of spatial scale in interpreting climate observations: Local climate observations may contradict summary statements about the effects of climate change on larger regional or global spatial scales. Effectively addressing these differences often requires communicators to understand local and regional climate drivers, and the distinction between a 'signal' associated with climate change and local climate 'noise.' Hydrological statistics in Missouri and California are shown to illustrate this case. 2) Issues of complexity related to extreme events: Climate change is typically invoked following a wide range of damaging meteorological events (e.g., heat waves, landfalling hurricanes, tornadoes), regardless of the strength of the relationship between anthropogenic climate change and the frequency or severity of that type of event. Examples are drawn from media coverage of several recent events, contrasting useful and potentially confusing word choices and frames. 3) Issues revolving around climate sensitivity: The so-called 'pause' or 'hiatus' in global warming has reverberated strongly through political and business discussions of climate change. Addressing the recent slowdown in warming yields an important opportunity to raise climate literacy in these communities. Attempts to use recent observations as a wedge between climate 'believers' and 'deniers' are likely to be counterproductive. Examples are drawn from Congressional testimony and media stories. All three cases illustrate ways that decision

  15. The extremity function index (EFI), a disability severity measure for neuromuscular diseases : psychometric evaluation

    NARCIS (Netherlands)

    Bos, Isaac; Wynia, Klaske; Drost, Gea; Almansa, Josué; Kuks, Joannes

    2017-01-01

    OBJECTIVE: To adapt and to combine the self-report Upper Extremity Functional Index and Lower Extremity Function Scale, for the assessment of disability severity in patients with a neuromuscular disease and to examine its psychometric properties in order to make it suitable for indicating disease

  16. Observation of gravity waves during the extreme tornado outbreak of 3 April 1974

    Science.gov (United States)

    Hung, R. J.; Phan, T.; Smith, R. E.

    1978-01-01

    A continuous wave-spectrum high-frequency radiowave Doppler sounder array was used to observe upper-atmospheric disturbances during an extreme tornado outbreak. The observations indicated that gravity waves with two harmonic wave periods were detected at the F-region ionospheric height. Using a group ray path computational technique, the observed gravity waves were traced in order to locate potential sources. The signals were apparently excited 1-3 hours before tornado touchdown. Reverse ray tracing indicated that the wave source was located at the aurora zone with a Kp index of 6 at the time of wave excitation. The summation of the 24-hour Kp index for the day was 36. The results agree with existing theories (Testud, 1970; Titheridge, 1971; Kato, 1976) for the excitation of large-scale traveling ionospheric disturbances associated with geomagnetic activity in the aurora zone.

  17. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    Science.gov (United States)

    Lejiang Yu; Shiyuan Zhong; Lisi Pei; Xindi (Randy) Bian; Warren E. Heilman

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for...

  18. STATISTICAL STUDY OF STRONG AND EXTREME GEOMAGNETIC DISTURBANCES AND SOLAR CYCLE CHARACTERISTICS

    International Nuclear Information System (INIS)

    Kilpua, E. K. J.; Olspert, N.; Grigorievskiy, A.; Käpylä, M. J.; Tanskanen, E. I.; Pelt, J.; Miyahara, H.; Kataoka, R.; Liu, Y. D.

    2015-01-01

    We study the relation between strong and extreme geomagnetic storms and solar cycle characteristics. The analysis uses an extensive geomagnetic index AA data set spanning over 150 yr complemented by the Kakioka magnetometer recordings. We apply Pearson correlation statistics and estimate the significance of the correlation with a bootstrapping technique. We show that the correlation between the storm occurrence and the strength of the solar cycle decreases from a clear positive correlation with increasing storm magnitude toward a negligible relationship. Hence, the quieter Sun can also launch superstorms that may lead to significant societal and economic impact. Our results show that while weaker storms occur most frequently in the declining phase, the stronger storms have the tendency to occur near solar maximum. Our analysis suggests that the most extreme solar eruptions do not have a direct connection to the solar large-scale dynamo-generated magnetic field, but are rather associated with smaller-scale dynamo and resulting turbulent magnetic fields. The fact that the phase distributions of sunspots and storms become increasingly in phase with increasing storm strength, on the other hand, may indicate that the extreme storms are related to the toroidal component of the solar large-scale field
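
    A minimal sketch of the statistical machinery named here, a Pearson correlation with a bootstrap estimate of its uncertainty, is shown below on synthetic data; the cycle-strength and storm-count values are invented for illustration and have nothing to do with the actual AA/Kakioka records.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(42)

        # Synthetic per-cycle data: solar-cycle strength and storm counts (illustrative only).
        cycle_strength = rng.uniform(50, 200, size=14)
        storm_count = 0.3 * cycle_strength + rng.normal(0, 15, size=14)

        r_obs, _ = pearsonr(cycle_strength, storm_count)

        # Bootstrap: resample cycles with replacement and recompute the correlation.
        boot = []
        for _ in range(10_000):
            idx = rng.integers(0, len(cycle_strength), size=len(cycle_strength))
            boot.append(pearsonr(cycle_strength[idx], storm_count[idx])[0])
        boot = np.array(boot)

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"r = {r_obs:.2f}, 95% bootstrap interval = [{lo:.2f}, {hi:.2f}]")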

  19. STATISTICAL STUDY OF STRONG AND EXTREME GEOMAGNETIC DISTURBANCES AND SOLAR CYCLE CHARACTERISTICS

    Energy Technology Data Exchange (ETDEWEB)

    Kilpua, E. K. J. [Department of Physics, University of Helsinki (Finland)]; Olspert, N.; Grigorievskiy, A.; Käpylä, M. J.; Tanskanen, E. I.; Pelt, J. [ReSoLVE Centre of Excellence, Department of Computer Science, P.O. Box 15400, FI-00076 Aalto University (Finland)]; Miyahara, H. [Musashino Art University, 1-736 Ogawa-cho, Kodaira-shi, Tokyo 187-8505 (Japan)]; Kataoka, R. [National Institute of Polar Research, 10-3 Midori-cho, Tachikawa, Tokyo 190-8518 (Japan)]; Liu, Y. D. [State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing 100190 (China)]

    2015-06-20

    We study the relation between strong and extreme geomagnetic storms and solar cycle characteristics. The analysis uses an extensive geomagnetic index AA data set spanning over 150 yr complemented by the Kakioka magnetometer recordings. We apply Pearson correlation statistics and estimate the significance of the correlation with a bootstrapping technique. We show that the correlation between the storm occurrence and the strength of the solar cycle decreases from a clear positive correlation with increasing storm magnitude toward a negligible relationship. Hence, the quieter Sun can also launch superstorms that may lead to significant societal and economic impact. Our results show that while weaker storms occur most frequently in the declining phase, the stronger storms have the tendency to occur near solar maximum. Our analysis suggests that the most extreme solar eruptions do not have a direct connection to the solar large-scale dynamo-generated magnetic field, but are rather associated with smaller-scale dynamo and resulting turbulent magnetic fields. The fact that the phase distributions of sunspots and storms become increasingly in phase with increasing storm strength, on the other hand, may indicate that the extreme storms are related to the toroidal component of the solar large-scale field.

  20. Scaling law for noise variance and spatial resolution in differential phase contrast computed tomography

    International Nuclear Information System (INIS)

    Chen Guanghong; Zambelli, Joseph; Li Ke; Bevins, Nicholas; Qi Zhihua

    2011-01-01

    Purpose: The noise variance versus spatial resolution relationship in differential phase contrast (DPC) projection imaging and computed tomography (CT) is derived and compared to conventional absorption-based x-ray projection imaging and CT. Methods: The scaling law for DPC-CT is theoretically derived and subsequently validated with phantom results from an experimental Talbot-Lau interferometer system. Results: For the DPC imaging method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray imaging projections. However, both in theory and in experimental results, in DPC-CT the noise variance scales with spatial resolution following an inverse linear relationship at fixed slice thickness. Conclusions: The scaling law in DPC-CT implies a lesser noise, and therefore dose, penalty for moving to higher spatial resolutions when compared to conventional absorption-based CT in order to maintain the same contrast-to-noise ratio.
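
    Stated compactly (a paraphrase of the abstract, using Delta x for the in-plane spatial resolution and omitting the proportionality constants), the reported scaling laws are

        \sigma^2_{\text{projection}} \propto (\Delta x)^{-2} \qquad \text{(absorption and DPC projections)}
        \sigma^2_{\text{DPC-CT}} \propto (\Delta x)^{-1} \qquad \text{(fixed slice thickness)}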

  1. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  2. Workshop report on large-scale matrix diagonalization methods in chemistry theory institute

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.]

    1996-10-01

    The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint on the successes in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of
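
    For the task highlighted above, extracting a few extremal eigenvalues of a very large sparse symmetric matrix, a minimal sketch using SciPy's Lanczos-based solver is given below; it is not the Davidson method discussed at the workshop, and the random tridiagonal test matrix is purely illustrative.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        rng = np.random.default_rng(0)

        # Build a large, sparse, symmetric test matrix (tridiagonal for simplicity).
        n = 50_000
        diag = rng.uniform(1.0, 10.0, size=n)
        offdiag = 0.01 * rng.standard_normal(n - 1)
        A = sp.diags([offdiag, diag, offdiag], offsets=[-1, 0, 1], format="csc")

        # A few extremal eigenvalues/eigenvectors, as needed in large CI-type problems.
        vals_small, vecs_small = eigsh(A, k=4, sigma=0.0, which="LM")  # smallest (shift-invert)
        vals_large, _ = eigsh(A, k=4, which="LA")                      # largest algebraic

        print("4 smallest eigenvalues:", np.sort(vals_small))
        print("4 largest eigenvalues: ", np.sort(vals_large))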

  3. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
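
    To give a flavor of the relaxation idea (a discrete-valued input relaxed to a continuous "virtual" input, then a discrete decision recovered from it), the toy single-step sketch below uses SciPy; it is a drastic simplification of the model predictive control scheme in the paper, and the dynamics, cost, and rounding rule are all assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Toy scalar system: x_next = a*x + b_c*u_c + b_d*u_d, with u_d in {0, 1}.
        a, b_c, b_d = 0.9, 0.5, -1.0
        x0 = 4.0

        def cost(u, x=x0):
            u_c, v = u                       # v is the "virtual" (relaxed) discrete input
            x_next = a * x + b_c * u_c + b_d * v
            return x_next**2 + 0.1 * u_c**2  # drive the state toward zero with small effort

        # Step 1: optimize over the continuous input and the relaxed (virtual) input in [0, 1].
        res = minimize(cost, x0=[0.0, 0.5], bounds=[(-2.0, 2.0), (0.0, 1.0)])
        u_c_star, v_star = res.x

        # Step 2: recover a discrete-valued input from the virtual one (simple rounding here).
        u_d_star = int(round(v_star))

        # Optionally re-optimize the continuous input with the discrete input fixed.
        res2 = minimize(lambda u: cost([u[0], u_d_star]), x0=[u_c_star], bounds=[(-2.0, 2.0)])

        print("u_c =", float(res2.x[0]), " u_d =", u_d_star)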

  4. Suggested Approaches to the Measurement of Computer Anxiety.

    Science.gov (United States)

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  5. Multi-scale and multi-domain computational astrophysics.

    Science.gov (United States)

    van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies

    2014-08-06

    Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena this governance is present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production quality community codes and provides a specialized environment to connect this plethora of solvers to a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  6. Vertical structure of extreme currents in the Faroe-Bank Channel

    Directory of Open Access Journals (Sweden)

    C. Carollo

    2005-09-01

    Full Text Available Extreme currents are studied with the aim of understanding their vertical and spatial structures in the Faroe-Bank Channel. Acoustic Doppler Current Profiler time series recorded in 3 deployments in this channel were investigated. To understand the main features of extreme events, the measurements were separated into their components through filtering and tidal analysis before applying the extreme value theory to the surge component. The Generalized Extreme Value (GEV) distribution and the Generalized Pareto Distribution (GPD) were used to study the variation of surge extremes from near-surface to deep waters. It was found that this component alone is not able to explain the extremes measured in total currents, particularly below 500 m. Here the mean residual flow enhanced by tidal rectification was found to be the component feature dominating extremes. Therefore, it must be taken into consideration when applying the extreme value theory, so as not to underestimate the return level for total currents. Return value speeds of up to 250 cm s-1 for 50/250-year return periods were found for deep waters, where the flow is constrained by the topography at bearings near 300/330°. It is also found that the UK Meteorological Office FOAM model is unable to reproduce either the magnitude or the form of the extremes, perhaps due to its coarse vertical and horizontal resolution, and is thus not suitable to model extremes on a regional scale. Keywords. Oceanography: Physical (Currents; General circulation; General or miscellaneous)
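
    A minimal peaks-over-threshold sketch in the spirit of the GPD analysis described above is given below, using SciPy on synthetic surge data; the threshold is an arbitrary high quantile, declustering of dependent exceedances is omitted, and none of the numbers come from the Faroe-Bank records.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(7)

        # Synthetic hourly "surge" speeds (cm/s) over roughly 10 years (illustrative only).
        n_hours = 10 * 365 * 24
        surge = rng.gamma(shape=2.0, scale=15.0, size=n_hours)

        # Peaks over threshold: keep exceedances above a high quantile.
        u = np.quantile(surge, 0.995)
        excess = surge[surge > u] - u
        rate = len(excess) / (n_hours / (365 * 24))      # exceedances per year

        # Fit the Generalized Pareto Distribution to the excesses (location fixed at 0).
        xi, _, sigma = genpareto.fit(excess, floc=0)

        # T-year return level: value exceeded on average once every T years.
        def return_level(T):
            return u + (sigma / xi) * ((rate * T) ** xi - 1)

        for T in (50, 250):
            print(f"{T}-year return level: {return_level(T):.1f} cm/s")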

  7. Identifying all moiety conservation laws in genome-scale metabolic networks.

    Science.gov (United States)

    De Martino, Andrea; De Martino, Daniele; Mulet, Roberto; Pagnani, Andrea

    2014-01-01

    The stoichiometry of a metabolic network gives rise to a set of conservation laws for the aggregate level of specific pools of metabolites, which, on one hand, pose dynamical constraints that cross-link the variations of metabolite concentrations and, on the other, provide key insight into a cell's metabolic production capabilities. When the conserved quantity identifies with a chemical moiety, extracting all such conservation laws from the stoichiometry amounts to finding all non-negative integer solutions of a linear system, a programming problem known to be NP-hard. We present an efficient strategy to compute the complete set of integer conservation laws of a genome-scale stoichiometric matrix, also providing a certificate for correctness and maximality of the solution. Our method is deployed for the analysis of moiety conservation relationships in two large-scale reconstructions of the metabolism of the bacterium E. coli, in six tissue-specific human metabolic networks, and, finally, in the human reactome as a whole, revealing that bacterial metabolism could be evolutionarily designed to cover broader production spectra than human metabolism. Convergence to the full set of moiety conservation laws in each case is achieved in extremely reduced computing times. In addition, we uncover a scaling relation that links the size of the independent pool basis to the number of metabolites, for which we present an analytical explanation.
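
    To make the linear-algebra starting point concrete (the easy part of the problem, not the authors' algorithm), the sketch below computes the left null space of a toy stoichiometric matrix with SymPy; the reaction network is invented.

        from sympy import Matrix

        # Toy stoichiometric matrix S (rows = metabolites A, B, C; columns = reactions):
        #   R1: A + B -> C        R2: C -> A + B
        S = Matrix([[-1,  1],
                    [-1,  1],
                    [ 1, -1]])

        # Linear conservation laws are left null vectors y with y^T S = 0.
        for y in S.T.nullspace():
            print(y.T)

        # Note: the method described above additionally restricts the search to
        # non-negative integer solutions (true moiety pools), which is the NP-hard part;
        # this sketch only exhibits the unconstrained left null space of a toy network.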

  8. Identifying all moiety conservation laws in genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Andrea De Martino

    Full Text Available The stoichiometry of a metabolic network gives rise to a set of conservation laws for the aggregate level of specific pools of metabolites, which, on one hand, pose dynamical constraints that cross-link the variations of metabolite concentrations and, on the other, provide key insight into a cell's metabolic production capabilities. When the conserved quantity identifies with a chemical moiety, extracting all such conservation laws from the stoichiometry amounts to finding all non-negative integer solutions of a linear system, a programming problem known to be NP-hard. We present an efficient strategy to compute the complete set of integer conservation laws of a genome-scale stoichiometric matrix, also providing a certificate for correctness and maximality of the solution. Our method is deployed for the analysis of moiety conservation relationships in two large-scale reconstructions of the metabolism of the bacterium E. coli, in six tissue-specific human metabolic networks, and, finally, in the human reactome as a whole, revealing that bacterial metabolism could be evolutionarily designed to cover broader production spectra than human metabolism. Convergence to the full set of moiety conservation laws in each case is achieved in extremely reduced computing times. In addition, we uncover a scaling relation that links the size of the independent pool basis to the number of metabolites, for which we present an analytical explanation.

  9. Risk factors for neck and upper extremity disorders among computers users and the effect of interventions: an overview of systematic reviews.

    Science.gov (United States)

    Andersen, Johan H; Fallentin, Nils; Thomsen, Jane F; Mikkelsen, Sigurd

    2011-05-12

    To summarize systematic reviews that 1) assessed the evidence for causal relationships between computer work and the occurrence of carpal tunnel syndrome (CTS) or upper extremity musculoskeletal disorders (UEMSDs), or 2) reported on intervention studies among computer users/or office workers. PubMed, Embase, CINAHL and Web of Science were searched for reviews published between 1999 and 2010. Additional publications were provided by content area experts. The primary author extracted all data using a purpose-built form, while two of the authors evaluated the quality of the reviews using recommended standard criteria from AMSTAR; disagreements were resolved by discussion. The quality of evidence syntheses in the included reviews was assessed qualitatively for each outcome and for the interventions. Altogether, 1,349 review titles were identified, 47 reviews were retrieved for full text relevance assessment, and 17 reviews were finally included as being relevant and of sufficient quality. The degrees of focus and rigorousness of these 17 reviews were highly variable. Three reviews on risk factors for carpal tunnel syndrome were rated moderate to high quality, 8 reviews on risk factors for UEMSDs ranged from low to moderate/high quality, and 6 reviews on intervention studies were of moderate to high quality. The quality of the evidence for computer use as a risk factor for CTS was insufficient, while the evidence for computer use and UEMSDs was moderate regarding pain complaints and limited for specific musculoskeletal disorders. From the reviews on intervention studies no strong evidence based recommendations could be given. Computer use is associated with pain complaints, but it is still not very clear if this association is causal. The evidence for specific disorders or diseases is limited. No effective interventions have yet been documented.

  10. Risk factors for neck and upper extremity disorders among computers users and the effect of interventions: an overview of systematic reviews.

    Directory of Open Access Journals (Sweden)

    Johan H Andersen

    Full Text Available BACKGROUND: To summarize systematic reviews that 1) assessed the evidence for causal relationships between computer work and the occurrence of carpal tunnel syndrome (CTS) or upper extremity musculoskeletal disorders (UEMSDs), or 2) reported on intervention studies among computer users/or office workers. METHODOLOGY/PRINCIPAL FINDINGS: PubMed, Embase, CINAHL and Web of Science were searched for reviews published between 1999 and 2010. Additional publications were provided by content area experts. The primary author extracted all data using a purpose-built form, while two of the authors evaluated the quality of the reviews using recommended standard criteria from AMSTAR; disagreements were resolved by discussion. The quality of evidence syntheses in the included reviews was assessed qualitatively for each outcome and for the interventions. Altogether, 1,349 review titles were identified, 47 reviews were retrieved for full text relevance assessment, and 17 reviews were finally included as being relevant and of sufficient quality. The degrees of focus and rigorousness of these 17 reviews were highly variable. Three reviews on risk factors for carpal tunnel syndrome were rated moderate to high quality, 8 reviews on risk factors for UEMSDs ranged from low to moderate/high quality, and 6 reviews on intervention studies were of moderate to high quality. The quality of the evidence for computer use as a risk factor for CTS was insufficient, while the evidence for computer use and UEMSDs was moderate regarding pain complaints and limited for specific musculoskeletal disorders. From the reviews on intervention studies no strong evidence based recommendations could be given. CONCLUSIONS/SIGNIFICANCE: Computer use is associated with pain complaints, but it is still not very clear if this association is causal. The evidence for specific disorders or diseases is limited. No effective interventions have yet been documented.

  11. Multi-catchment rainfall-runoff simulation for extreme flood estimation

    Science.gov (United States)

    Paquet, Emmanuel

    2017-04-01

    The SCHADEX method (Paquet et al., 2013) is a reference method in France for the estimation of extreme floods for dam design. The method is based on a semi-continuous rainfall-runoff simulation process: hundreds of different rainy events, randomly drawn up to extreme values, are simulated independently in the hydrological conditions of each day on which a rainy event was actually observed. This allows an exhaustive set of crossings between precipitation and soil saturation hazards to be generated, and a complete distribution of flood discharges to be built up to extreme quantiles. The hydrological model used within SCHADEX, the MORDOR model (Garçon, 1996), is a lumped model, which implies that hydrological processes, e.g. rainfall and soil saturation, are assumed to be homogeneous throughout the catchment. Snow processes are nevertheless represented in relation with altitude. This hypothesis of homogeneity is questionable, especially as the size of the catchment increases, or in areas of highly contrasted climatology (like mountainous areas). Conversely, modeling the catchment with a fully distributed approach would cause different problems, in particular distributing the rainfall-runoff model parameters through space and, within the SCHADEX stochastic framework, generating extreme rain fields with credible spatio-temporal features. An intermediate solution is presented here. It provides a better representation of the hydro-climatic diversity of the studied catchment (especially regarding flood processes) while keeping the SCHADEX simulation framework. It consists of dividing the catchment into several more homogeneous sub-catchments. Rainfall-runoff models are parameterized individually for each of them, using local discharge data if available. A first SCHADEX simulation is done at the global scale, which allows assigning a probability to each simulated event, mainly based on the global areal rainfall drawn for the event (see Paquet et al., 2013 for details). Then the
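
    A deliberately crude sketch of the semi-continuous idea, crossing randomly drawn rainy events with the catchment conditions of observed rainy days and collecting the simulated peaks into a discharge distribution, is given below; the toy runoff formula, the event distribution, and all numbers are assumptions, not the MORDOR/SCHADEX implementation.

        import numpy as np

        rng = np.random.default_rng(5)

        # Hypothetical antecedent soil-saturation states for 500 observed rainy days.
        saturation = rng.beta(2.0, 2.0, size=500)

        def peak_discharge(event_rain_mm, sat):
            # Crude stand-in for a lumped rainfall-runoff model: runoff grows with saturation.
            runoff_coefficient = 0.1 + 0.8 * sat
            return runoff_coefficient * event_rain_mm

        # For each observed day, simulate many synthetic rainy events drawn up to extreme values.
        n_events = 200
        event_rain = 10.0 + 20.0 * rng.pareto(a=3.0, size=(len(saturation), n_events))
        peaks = peak_discharge(event_rain, saturation[:, None]).ravel()

        # Empirical distribution of simulated flood peaks, out to rare quantiles.
        for q in (0.9, 0.99, 0.999):
            print(f"quantile {q}: {np.quantile(peaks, q):.1f} (arbitrary units)")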

  12. Statistical Multiplexing of Computations in C-RAN with Tradeoffs in Latency and Energy

    DEFF Research Database (Denmark)

    Kalør, Anders Ellersgaard; Agurto Agurto, Mauricio Ignacio; Pratas, Nuno

    2017-01-01

    frame duration, then this may result in additional access latency and limit the energy savings. In this paper we investigate the tradeoff by considering two extreme time-scales for the resource multiplexing: (i) long-term, where the computational resources are adapted over periods much larger than...... the access frame durations; (ii) short-term, where the adaptation is below the access frame duration. We develop a general C-RAN queuing model that models the access latency and show, for Poisson arrivals, that long-term multiplexing achieves savings comparable to short-term multiplexing, while offering low...

  13. Tutorial - applying extreme value theory to characterize food-processing systems

    DEFF Research Database (Denmark)

    Skou, Peter Bæk; Holroyd, Stephen E.; van der Berg, Franciscus Winfried J

    2017-01-01

    This tutorial presents extreme value theory (EVT) as an analytical tool in process characterization and shows its potential to describe production performance, e.g., across different factories, via reliable estimates of the frequency and scale of extreme events. Two alternative EVT methods...... are discussed: point over threshold and block maxima. We illustrate the theoretical framework for EVT with process data from two different examples from the food-processing industry. Finally, we discuss limitations, decisions, and possibilities when applying EVT for process data....
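
    As a small illustration of the block-maxima branch of EVT mentioned above (not taken from the tutorial itself), the sketch below fits a GEV distribution to yearly maxima of a synthetic process variable with SciPy and reads off an extreme quantile; the data and all choices are invented.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(3)

        # Synthetic daily process measurements over 30 "years" (illustrative only).
        daily = rng.lognormal(mean=0.0, sigma=0.4, size=(30, 365))
        block_maxima = daily.max(axis=1)                 # one maximum per block (year)

        # Fit the Generalized Extreme Value distribution to the block maxima.
        c, loc, scale = genextreme.fit(block_maxima)

        # Level expected to be exceeded once every 100 blocks on average.
        level_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
        print(f"100-block return level: {level_100:.2f}")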

  14. Evaluating sub-seasonal skill in probabilistic forecasts of Atmospheric Rivers and associated extreme events

    Science.gov (United States)

    Subramanian, A. C.; Lavers, D.; Matsueda, M.; Shukla, S.; Cayan, D. R.; Ralph, M.

    2017-12-01

    Atmospheric rivers (ARs) - elongated plumes of intense moisture transport - are a primary source of hydrological extremes, water resources and impactful weather along the West Coast of North America and Europe. There is strong demand in the water management, societal infrastructure and humanitarian sectors for reliable sub-seasonal forecasts, particularly of extreme events, such as floods and droughts so that actions to mitigate disastrous impacts can be taken with sufficient lead-time. Many recent studies have shown that ARs in the Pacific and the Atlantic are modulated by large-scale modes of climate variability. Leveraging the improved understanding of how these large-scale climate modes modulate the ARs in these two basins, we use the state-of-the-art multi-model forecast systems such as the North American Multi-Model Ensemble (NMME) and the Subseasonal-to-Seasonal (S2S) database to help inform and assess the probabilistic prediction of ARs and related extreme weather events over the North American and European West Coasts. We will present results from evaluating probabilistic forecasts of extreme precipitation and AR activity at the sub-seasonal scale. In particular, results from the comparison of two winters (2015-16 and 2016-17) will be shown, winters which defied canonical El Niño teleconnection patterns over North America and Europe. We further extend this study to analyze probabilistic forecast skill of AR events in these two basins and the variability in forecast skill during certain regimes of large-scale climate modes.

  15. 21st Century Changes in Precipitation Extremes Based on Resolved Atmospheric Patterns

    Science.gov (United States)

    Gao, X.; Schlosser, C. A.; O'Gorman, P. A.; Monier, E.

    2014-12-01

    Global warming is expected to alter the frequency and/or magnitude of extreme precipitation events. Such changes could have substantial ecological, economic, and sociological consequences. However, climate models in general do not correctly reproduce the frequency distribution of precipitation, especially at the regional scale. In this study, a validated analogue method is employed to diagnose the potential future shifts in the probability of extreme precipitation over the United States under global warming. The method is based on the use of the resolved large-scale meteorological conditions (i.e. flow features, moisture supply) to detect the occurrence of extreme precipitation. The CMIP5 multi-model projections have been compiled for two radiative forcing scenarios (Representative Concentration Pathways 4.5 and 8.5). We further analyze the accompanying circulation features and their changes that may be responsible for shifts in extreme precipitation in response to a changed climate. The application of such an analogue method to detect other types of hazard events, e.g., landslides, is also explored. The results from this study may guide hazardous weather watches and help society develop adaptive strategies for preventing catastrophic losses.
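
    A bare-bones analogue diagnosis, in the spirit of (but much simpler than) the validated method described above, is sketched below: for each target day, the k most similar archive days in a space of large-scale predictors are found, and the fraction of those analogues with extreme precipitation serves as the diagnosed probability. The predictors, the extreme definition, and k are all hypothetical.

        import numpy as np

        rng = np.random.default_rng(8)

        # Hypothetical training archive: daily large-scale predictors (e.g., circulation and
        # moisture indices) and a flag for whether extreme precipitation occurred that day.
        n_days, n_features = 5000, 6
        predictors = rng.standard_normal((n_days, n_features))
        extreme_flag = (predictors[:, 0] + 0.5 * predictors[:, 1]
                        + 0.3 * rng.standard_normal(n_days)) > 2.0

        def analogue_probability(target, k=30):
            # k most similar archive days (Euclidean distance in predictor space);
            # the fraction of analogues with extreme precipitation is the diagnosis.
            d = np.linalg.norm(predictors - target, axis=1)
            nearest = np.argsort(d)[:k]
            return extreme_flag[nearest].mean()

        # Diagnose a handful of new (e.g., model-projected) days.
        new_days = rng.standard_normal((5, n_features))
        for i, day in enumerate(new_days):
            print(f"day {i}: estimated probability of extreme precipitation = "
                  f"{analogue_probability(day):.2f}")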

  16. Characterization of the Scale Model Acoustic Test Overpressure Environment using Computational Fluid Dynamics

    Science.gov (United States)

    Nielsen, Tanner; West, Jeff

    2015-01-01

    The Scale Model Acoustic Test (SMAT) is a 5% scale test of the Space Launch System (SLS), which is currently being designed at Marshall Space Flight Center (MSFC). The purpose of this test is to characterize and understand a variety of acoustic phenomena that occur during the early portions of lift off, one being the overpressure environment that develops shortly after booster ignition. The pressure waves that propagate from the mobile launcher (ML) exhaust hole are defined as the ignition overpressure (IOP), while the portion of the pressure waves that exit the duct or trench are the duct overpressure (DOP). Distinguishing the IOP and DOP in scale model test data has been difficult in past experiences and in early SMAT results, due to the effects of scaling the geometry. The speed of sound of the air and combustion gas constituents is not scaled, and therefore the SMAT pressure waves propagate at approximately the same speed as occurs in full scale. However, the SMAT geometry is twenty times smaller, allowing the pressure waves to move down the exhaust hole, through the trench and duct, and impact the vehicle model much faster than occurs at full scale. The DOP waves impact portions of the vehicle at the same time as the IOP waves, making it difficult to distinguish the different waves and fully understand the data. To better understand the SMAT data, a computational fluid dynamics (CFD) analysis was performed with a fictitious geometry that isolates the IOP and DOP. The upper and lower portions of the domain were segregated to accomplish the isolation in such a way that the flow physics were not significantly altered. The Loci/CHEM CFD software program was used to perform this analysis.

  17. A Multi-Scale Computational Study on the Mechanism of Streptococcus pneumoniae Nicotinamidase (SpNic)

    OpenAIRE

    Ion, Bogdan; Kazim, Erum; Gauld, James

    2014-01-01

    Nicotinamidase (Nic) is a key zinc-dependent enzyme in NAD metabolism that catalyzes the hydrolysis of nicotinamide to give nicotinic acid. A multi-scale computational approach has been used to investigate the catalytic mechanism, substrate binding and roles of active site residues of Nic from Streptococcus pneumoniae (SpNic). In particular, density functional theory (DFT), molecular dynamics (MD) and ONIOM quantum mechanics/molecular mechanics (QM/MM) methods have been employed. The o...

  18. Nanoelectromechanical Switches for Low-Power Digital Computing

    Directory of Open Access Journals (Sweden)

    Alexis Peschot

    2015-08-01

    Full Text Available The need for more energy-efficient solid-state switches beyond complementary metal-oxide-semiconductor (CMOS transistors has become a major concern as the power consumption of electronic integrated circuits (ICs steadily increases with technology scaling. Nano-Electro-Mechanical (NEM relays control current flow by nanometer-scale motion to make or break physical contact between electrodes, and offer advantages over transistors for low-power digital logic applications: virtually zero leakage current for negligible static power consumption; the ability to operate with very small voltage signals for low dynamic power consumption; and robustness against harsh environments such as extreme temperatures. Therefore, NEM logic switches (relays have been investigated by several research groups during the past decade. Circuit simulations calibrated to experimental data indicate that scaled relay technology can overcome the energy-efficiency limit of CMOS technology. This paper reviews recent progress toward this goal, providing an overview of the different relay designs and experimental results achieved by various research groups, as well as of relay-based IC design principles. Remaining challenges for realizing the promise of nano-mechanical computing, and ongoing efforts to address these, are discussed.

  19. Magnetic and velocity fields in a dynamo operating at extremely small Ekman and magnetic Prandtl numbers

    Science.gov (United States)

    Šimkanin, Ján; Kyselica, Juraj

    2017-12-01

    Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic the geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core, due to the parameter range chosen.

  20. Psychology of computer use: XXIV. Computer-related stress among technical college students.

    Science.gov (United States)

    Ballance, C T; Rogers, S U

    1991-10-01

    Hudiburg's Computer Technology Hassles Scale, along with a measure of global stress and a scale on attitudes toward computers, were administered to 186 students in a two-year technical college. Hudiburg's work with the hassles scale as a measure of "technostress" was affirmed. Moderate, but statistically significant, correlations among the three scales are reported. No relationship between the hassles scale and achievement as measured by GPA was detected.

  1. Ages and Stages Questionnaire used to measure cognitive deficit in children born extremely preterm

    DEFF Research Database (Denmark)

    Klamer, Anja; Lando, Ane; Pinborg, Anja

    2005-01-01

    AIM: To validate the Ages and Stages Questionnaire (ASQ) and to measure average cognitive deficit in children born extremely preterm. METHODS: Parents of 30 term children aged 36-42 mo completed the ASQ and the children underwent the Wechsler Preschool and Primary Scales of Intelligence--Revised......

  2. Flood protection diversification to reduce probabilities of extreme losses.

    Science.gov (United States)

    Zhou, Qian; Lambert, James H; Karvetski, Christopher W; Keisler, Jeffrey M; Linkov, Igor

    2012-11-01

    Recent catastrophic flood losses require the development of resilient approaches to flood risk protection. This article assesses how diversification of a system of coastal protections might decrease the probabilities of extreme flood losses. The study compares the performance of portfolios each consisting of four types of flood protection assets in a large region of dike rings. A parametric analysis suggests conditions in which diversification of the types of included flood protection assets decreases extreme flood losses. Increased return periods of extreme losses are associated with portfolios where the asset types have low correlations of economic risk. The effort highlights the importance of understanding correlations across asset types in planning for large-scale flood protection. It allows explicit integration of climate change scenarios in developing flood mitigation strategy. © 2012 Society for Risk Analysis.
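
    The diversification argument can be illustrated with a small Monte Carlo experiment: correlated annual losses across four hypothetical protection asset types, comparing a concentrated portfolio with a diversified one. All distributions, correlations, and loss levels below are invented for illustration.

    ```python
    # Monte Carlo comparison of concentrated vs diversified flood-protection portfolios.
    import numpy as np

    rng = np.random.default_rng(2)
    n_years = 100_000
    mu, sigma = np.log(10.0), 0.8                      # lognormal loss parameters (illustrative)

    def simulate_losses(corr):
        """Draw correlated lognormal annual losses for 4 asset types."""
        cov = np.full((4, 4), corr) + (1 - corr) * np.eye(4)
        z = rng.multivariate_normal(np.zeros(4), cov, size=n_years)
        return np.exp(mu + sigma * z)

    def exceedance_prob(total_losses, level):
        return (total_losses > level).mean()

    level = 200.0                                      # an "extreme" total-loss level
    for corr in (0.9, 0.2):                            # high vs low economic-risk correlation
        losses = simulate_losses(corr)
        concentrated = 4 * losses[:, 0]                # all protection of a single type
        diversified = losses.sum(axis=1)               # spread across the four types
        print(f"corr={corr}: P(loss>{level}) concentrated={exceedance_prob(concentrated, level):.4f} "
              f"diversified={exceedance_prob(diversified, level):.4f}")
    ```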

  3. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, Robert [University of Southern California, Information Sciences Institute]; Ang, James [Sandia National Laboratories]; Bergman, Keren [Columbia University]; Borkar, Shekhar [Intel]; Carlson, William [Institute for Defense Analyses]; Carrington, Laura [University of California, San Diego]; Chiu, George [IBM]; Colwell, Robert [DARPA]; Dally, William [NVIDIA]; Dongarra, Jack [University of Tennessee]; Geist, Al [Oak Ridge National Laboratory]; Haring, Rud [IBM]; Hittinger, Jeffrey [Lawrence Livermore National Laboratory]; Hoisie, Adolfy [Pacific Northwest National Laboratory]; Klein, Dean [Micron]; Kogge, Peter [University of Notre Dame]; Lethin, Richard [Reservoir Labs]; Sarkar, Vivek [Rice University]; Schreiber, Robert [Hewlett Packard]; Shalf, John [Lawrence Berkeley National Laboratory]; Sterling, Thomas [Indiana University]; Stevens, Rick [Argonne National Laboratory]; Bashor, Jon [Lawrence Berkeley National Laboratory]; Brightwell, Ron [Sandia National Laboratories]; Coteus, Paul [IBM]; Debenedictus, Erik [Sandia National Laboratories]; Hiller, Jon [Science and Technology Associates]; Kim, K. H. [IBM]; Langston, Harper [Reservoir Labs]; Murphy, Richard [Micron]; Webster, Clayton [Oak Ridge National Laboratory]; Wild, Stefan [Argonne National Laboratory]; Grider, Gary [Los Alamos National Laboratory]; Ross, Rob [Argonne National Laboratory]; Leyffer, Sven [Argonne National Laboratory]; Laros III, James [Sandia National Laboratories]

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  4. Effects of participatory ergonomic intervention on the development of upper extremity musculoskeletal disorders and disability in office employees using a computer.

    Science.gov (United States)

    Baydur, Hakan; Ergör, Alp; Demiral, Yücel; Akalın, Elif

    2016-06-16

    To evaluate the effect of the participatory ergonomic method on the development of upper extremity musculoskeletal disorders and disability in office employees using a computer. This randomized controlled intervention study comprised 116 office workers using computers. Those in the intervention group were taught office ergonomics and the risk assessment method. A Cox proportional hazards model and generalized estimating equations (GEEs) were used. In the 10-month postintervention follow-up, the probability of developing symptoms was 50.9%. According to the multivariate analysis results, the probability of developing symptoms on the right side of the neck and in the right wrist and hand was significantly lower in the intervention group than in the control group. Participatory ergonomic intervention decreases the probability of musculoskeletal complaints and the disability/symptom level in office workers.
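
    A hedged sketch of the kind of time-to-symptom-onset model named in the abstract (Cox proportional hazards), using the `lifelines` package on synthetic data; the variables, follow-up length, and effect sizes are placeholders, not the study's data.

    ```python
    # Cox proportional hazards fit on a synthetic intervention/control cohort.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(3)
    n = 116                                             # cohort size as in the study; data synthetic
    intervention = rng.integers(0, 2, size=n)
    # Longer average time to symptom onset with the intervention (illustrative effect)
    time_to_symptom = rng.exponential(scale=8 + 4 * intervention)
    observed = time_to_symptom < 10                     # follow-up ends at ~10 months

    df = pd.DataFrame({
        "months": np.minimum(time_to_symptom, 10.0),    # censor at end of follow-up
        "symptom": observed.astype(int),
        "intervention": intervention,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="symptom")
    print(cph.summary)                                  # hazard ratio for the intervention covariate
    ```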

  5. Radiologic diagnosis of malignant soft-tissue tumors of the extremities

    International Nuclear Information System (INIS)

    Peters, P.E.; Friedmann, G.

    1983-01-01

    In malignant soft-tissue tumors of the extremities the radiologist is asked to define the size and extent of the lesion and its relationship to adjacent structures. The assessment of the nature of the lesion is of utmost importance; however, the contribution of the different imaging modalities varies considerably. In a review article the current roles of conventional radiography, xeroradiography, real-time ultrasonography, computed tomography and arteriography in the diagnostic workup of malignant soft-tissue tumors of the extremities are discussed. The statements made are based upon our own comparative studies as well as on a review of the literature. In the assessment of the nature of a soft-tissue mass the contribution of all radiologic imaging methods is rather limited, although arteriography may add valuable information if performed complementary to CT. Real-time ultrasonography is well suited to define the size, location and extent of peripheral soft-tissue masses. It is therefore recommended as the first imaging method and for follow-up studies. Equivocal findings by real-time sonography and new cases for treatment planning must be confirmed by computed tomography, which proved to be the most reliable and the best reproducible imaging method for soft-tissue tumors of the extremities. (orig.)

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Landers, N.F.; Petrie, L.M.; Knight, J.R. [Oak Ridge National Lab., TN (United States)] [and others]

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3 for the documentation of the data libraries and subroutine libraries.

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    International Nuclear Information System (INIS)

    Landers, N.F.; Petrie, L.M.; Knight, J.R.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3 for the documentation of the data libraries and subroutine libraries

  8. THE DECAY OF A WEAK LARGE-SCALE MAGNETIC FIELD IN TWO-DIMENSIONAL TURBULENCE

    Energy Technology Data Exchange (ETDEWEB)

    Kondić, Todor; Hughes, David W.; Tobias, Steven M., E-mail: t.kondic@leeds.ac.uk [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2016-06-01

    We investigate the decay of a large-scale magnetic field in the context of incompressible, two-dimensional magnetohydrodynamic turbulence. It is well established that a very weak mean field, of strength significantly below the equipartition value, induces a small-scale field strong enough to inhibit the process of turbulent magnetic diffusion. In light of ever-increasing computer power, we revisit this problem to investigate fluid and magnetic Reynolds numbers that were previously inaccessible. Furthermore, by exploiting the relation between the turbulent diffusion of the magnetic potential and that of the magnetic field, we are able to calculate the turbulent magnetic diffusivity extremely accurately through the imposition of a uniform mean magnetic field. We confirm the strong dependence of the turbulent diffusivity on the product of the magnetic Reynolds number and the energy of the large-scale magnetic field. We compare our findings with various theoretical descriptions of this process.

  9. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    Science.gov (United States)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data have become ubiquitous, the technological and scientific challenge is to maximize the exploitation of such a huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. Such an experiment confirms the big advantage of
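
    The following is not the P-SBAS implementation, only a minimal illustration of the coarse-grained, per-interferogram-pair parallelism described above, using a local process pool; scene identifiers and the processing step are stand-ins.

    ```python
    # Coarse-grained parallelism over interferogram pairs with a process pool.
    from itertools import combinations
    from multiprocessing import Pool

    def process_pair(pair):
        """Placeholder for forming and unwrapping one interferogram."""
        early, late = pair
        # ... coregistration, interferogram formation, filtering, unwrapping ...
        return (early, late, "ok")

    if __name__ == "__main__":
        acquisitions = [f"S1_IW_SLC_2017{m:02d}" for m in range(1, 13)]   # hypothetical scene IDs
        # Small-baseline-style pair selection: connect each scene to its 2 successors
        pairs = [(a, b) for a, b in combinations(acquisitions, 2)
                 if acquisitions.index(b) - acquisitions.index(a) <= 2]
        with Pool(processes=4) as pool:          # multi-core step; a multi-node setup would shard `pairs`
            results = pool.map(process_pair, pairs)
        print(f"processed {len(results)} interferogram pairs")
    ```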

  10. Computer-aided classification of forest cover types from small scale aerial photography

    Science.gov (United States)

    Bliss, John C.; Bonnicksen, Thomas M.; Mace, Thomas H.

    1980-11-01

    The US National Park Service must map forest cover types over extensive areas in order to fulfill its goal of maintaining or reconstructing presettlement vegetation within national parks and monuments. Furthermore, such cover type maps must be updated on a regular basis to document vegetation changes. Computer-aided classification of small scale aerial photography is a promising technique for generating forest cover type maps efficiently and inexpensively. In this study, seven cover types were classified with an overall accuracy of 62 percent from a reproduction of a 1∶120,000 color infrared transparency of a conifer-hardwood forest. The results were encouraging, given the degraded quality of the photograph and the fact that features were not centered, as well as the lack of information on lens vignetting characteristics to make corrections. Suggestions are made for resolving these problems in future research and applications. In addition, it is hypothesized that the overall accuracy is artificially low because the computer-aided classification more accurately portrayed the intermixing of cover types than the hand-drawn maps to which it was compared.
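
    As an illustration of computer-aided cover-type classification of this kind, the sketch below trains a Gaussian maximum-likelihood style classifier on synthetic three-band pixel samples; the bands, classes, and values are invented and not taken from the study.

    ```python
    # Per-pixel supervised classification of synthetic multispectral data.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(4)
    n_per_class, n_classes = 300, 3                       # e.g. conifer / hardwood / open
    means = rng.uniform(50, 200, size=(n_classes, 3))     # class means in 3 "bands"
    X_train = np.vstack([rng.normal(m, 15, size=(n_per_class, 3)) for m in means])
    y_train = np.repeat(np.arange(n_classes), n_per_class)

    clf = QuadraticDiscriminantAnalysis().fit(X_train, y_train)   # classic maximum-likelihood classifier

    # Classify a small synthetic "image" of 100 x 100 pixels with 3 bands
    image = rng.normal(means[1], 15, size=(100, 100, 3))
    cover_map = clf.predict(image.reshape(-1, 3)).reshape(100, 100)
    print("class counts:", np.bincount(cover_map.ravel(), minlength=n_classes))
    ```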

  11. Australia's Unprecedented Future Temperature Extremes Under Paris Limits to Warming

    Science.gov (United States)

    Lewis, Sophie C.; King, Andrew D.; Mitchell, Daniel M.

    2017-10-01

    Record-breaking temperatures can detrimentally impact ecosystems, infrastructure, and human health. Previous studies show that climate change has influenced some observed extremes, which are expected to become more frequent under enhanced future warming. Understanding the magnitude, as well as the frequency, of such future extremes is critical for limiting detrimental impacts. We focus on temperature changes in Australian regions, including over a major coral reef-building area, and assess the potential magnitude of future extreme temperatures under Paris Agreement global warming targets (1.5°C and 2°C). Under these limits to global mean warming, we determine a set of projected high-magnitude unprecedented Australian temperature extremes. These include extremes unexpected based on observational temperatures, including current record-breaking events. For example, while the difference in global-average warming during the hottest Australian summer and the 2°C Paris target is 1.1°C, extremes of 2.4°C above the observed summer record are simulated. This example represents a more than doubling of the magnitude of extremes, compared with global mean change, and such temperatures are unexpected based on the observed record alone. Projected extremes do not necessarily scale linearly with mean global warming, and this effect demonstrates the significant potential benefits of limiting warming to 1.5°C, compared to 2°C or warmer.

  12. Extreme value prediction of the wave-induced vertical bending moment in large container ships

    DEFF Research Database (Denmark)

    Andersen, Ingrid Marie Vincent; Jensen, Jørgen Juncher

    2015-01-01

    increase the extreme hull girder response significantly. Focus in the present paper is on the influence of the hull girder flexibility on the extreme response amidships, namely the wave-induced vertical bending moment (VBM) in hogging, and the prediction of the extreme value of the same. The analysis...... in the present paper is based on time series of full scale measurements from three large container ships of 8600, 9400 and 14000 TEU. When carrying out the extreme value estimation the peak-over-threshold (POT) method combined with an appropriate extreme value distribution is applied. The choice of a proper...... threshold level as well as the statistical correlation between clustered peaks influence the extreme value prediction and are taken into consideration in the present paper....
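
    The declustering issue mentioned above (statistical correlation between clustered peaks) can be sketched as follows: threshold exceedances of an autocorrelated toy response are grouped into clusters, one peak per cluster is retained, and a generalized Pareto distribution is fitted to the excesses. The signal, threshold, and minimum cluster separation are illustrative assumptions, not values from the measurements.

    ```python
    # Peak-over-threshold with declustering on a synthetic autocorrelated response.
    import numpy as np
    from scipy import stats
    from scipy.signal import lfilter

    rng = np.random.default_rng(5)
    n = 200_000
    vbm = lfilter([1.0], [1.0, -0.95], rng.standard_normal(n))   # AR(1) toy "VBM" response

    u = np.quantile(vbm, 0.99)                       # illustrative threshold choice
    exceed_idx = np.flatnonzero(vbm > u)

    # Decluster: start a new cluster whenever consecutive exceedances are more
    # than `gap` samples apart, then keep only the largest peak of each cluster.
    gap = 100
    cluster_breaks = np.flatnonzero(np.diff(exceed_idx) > gap) + 1
    clusters = np.split(exceed_idx, cluster_breaks)
    peaks = np.array([vbm[c].max() for c in clusters])

    c_hat, _, scale_hat = stats.genpareto.fit(peaks - u, floc=0.0)
    print(f"{len(peaks)} independent peaks from {exceed_idx.size} exceedances; "
          f"GPD shape={c_hat:.3f}, scale={scale_hat:.3f}")
    ```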

  13. Application of the Most Likely Extreme Response Method for Wave Energy Converters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Quon, Eliot; Platt, Andrew; Yu, Yi-Hsiang; Lawson, Michael

    2016-07-01

    Extreme loads are often a key cost driver for wave energy converters (WECs). As an alternative to exhaustive Monte Carlo or long-term simulations, the most likely extreme response (MLER) method allows mid- and high-fidelity simulations to be used more efficiently in evaluating WEC response to events at the edges of the design envelope, and is therefore applicable to system design analysis. The study discussed in this paper applies the MLER method to investigate the maximum heave, pitch, and surge force of a point absorber WEC. Most likely extreme waves were obtained from a set of wave statistics data based on spectral analysis and the response amplitude operators (RAOs) of the floating body; the RAOs were computed from a simple radiation-and-diffraction-theory-based numerical model. A weakly nonlinear numerical method and a computational fluid dynamics (CFD) method were then applied to compute the short-term response to the MLER wave. Effects of nonlinear wave and floating body interaction on the WEC under the anticipated 100-year waves were examined by comparing the results from the linearly superimposed RAOs, the weakly nonlinear model, and CFD simulations. Overall, the MLER method was successfully applied. In particular, when coupled to a high-fidelity CFD analysis, the nonlinear fluid dynamics can be readily captured.
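
    A stripped-down sketch of the conditioning idea behind MLER: each wave-frequency component is weighted by how strongly it drives the response (through the RAO), giving the expected wave shape associated with a target response maximum. The spectrum, RAO form, target level, and phase convention below are illustrative assumptions, not the paper's WEC model.

    ```python
    # Simplified MLER-style focused wave conditioned on a target response peak.
    import numpy as np

    w = np.linspace(0.2, 2.0, 200)                       # angular frequencies [rad/s]
    dw = w[1] - w[0]

    # Illustrative Pierson-Moskowitz-like spectrum and a single-DOF heave RAO
    Hs, Tp = 9.0, 14.0                                   # a severe, 100-year-like sea state (assumed)
    wp = 2 * np.pi / Tp
    S = 5 / 16 * Hs**2 * wp**4 / w**5 * np.exp(-1.25 * (wp / w) ** 4)
    wn, zeta = 0.6, 0.2                                  # hypothetical natural frequency / damping
    rao = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)      # complex heave RAO (assumed form)

    # Response variance and the wave/response cross-covariance coefficients
    sigma_r2 = np.sum(np.abs(rao) ** 2 * S * dw)
    target_max = 3.0 * np.sqrt(sigma_r2)                 # condition on a 3-sigma response peak

    t = np.linspace(-60, 60, 2000)                       # time relative to the response maximum
    eta = (target_max / sigma_r2) * np.sum(
        S[:, None] * np.abs(rao)[:, None] * dw
        * np.cos(np.outer(w, t) + np.angle(rao)[:, None]),
        axis=0,
    )
    print(f"focused wave: max elevation {eta.max():.2f} m near t = {t[np.argmax(eta)]:.1f} s")
    ```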

  14. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    Science.gov (United States)

    Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and Shanghai Pujiang Program, China (Grant No. 14PJ1403900).

  15. Precipitation extremes and their relation to climatic indices in the Pacific Northwest USA

    Science.gov (United States)

    Zarekarizi, Mahkameh; Rana, Arun; Moradkhani, Hamid

    2018-06-01

    The literature has focused on the influence of climate indices on precipitation extremes. The current study presents an evaluation of precipitation-based extremes in the Columbia River Basin (CRB) in the Pacific Northwest USA. We first analyzed the precipitation-based extremes using statistically (ten GCMs) and dynamically (three GCMs) downscaled past and future climate projections. Seven precipitation-based indices that help inform about flood duration/intensity are used. These indices provide first-hand information on spatial and temporal scales for different service sectors including energy, agriculture, and forestry. Evaluation of these indices is first performed for the historical period (1971-2000), followed by analysis of their relation to large-scale teleconnections. Further, we mapped these indices over the area to evaluate the spatial variation of past and future extremes in downscaled and observational data. The analysis shows that high values of the extreme indices are clustered in either the western or the northern parts of the basin for the historical period, whereas the northern part experiences a higher degree of change in the indices for the future scenario. The focus is also on evaluating the relation of these extreme indices to climate teleconnections in the historical period to understand their relationship with extremes over the CRB. Various climate indices are evaluated for their relationship using Principal Component Analysis (PCA) and Singular Value Decomposition (SVD). Results indicate that, out of the 13 climate teleconnections used in the study, the CRB is most strongly and inversely affected by the East Pacific (EP), Western Pacific (WP), East Atlantic (EA) and North Atlantic Oscillation (NAO) patterns.
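
    The SVD-based coupling analysis mentioned above (often called maximum covariance analysis) can be sketched on synthetic data: decompose the cross-covariance between a set of precipitation-extreme indices and a set of teleconnection indices and inspect the leading mode. All time series below are placeholders.

    ```python
    # SVD of the cross-covariance between extreme indices and teleconnection indices.
    import numpy as np

    rng = np.random.default_rng(6)
    n_years = 30
    nao = rng.standard_normal(n_years)                       # stand-in teleconnection indices
    ep, wp = rng.standard_normal(n_years), rng.standard_normal(n_years)
    tele = np.column_stack([nao, ep, wp])

    # Seven extreme-precipitation indices, partly driven (inversely) by NAO
    extremes = -0.6 * nao[:, None] + rng.standard_normal((n_years, 7))

    # Standardize, then take the SVD of the cross-covariance matrix
    za = (extremes - extremes.mean(0)) / extremes.std(0)
    zb = (tele - tele.mean(0)) / tele.std(0)
    cov = za.T @ zb / (n_years - 1)                          # 7 x 3 cross-covariance
    u, s, vt = np.linalg.svd(cov, full_matrices=False)

    frac = s**2 / np.sum(s**2)                               # squared-covariance fraction per mode
    print("leading mode explains", f"{frac[0]:.0%}", "of squared covariance")
    print("teleconnection loadings of mode 1:", np.round(vt[0], 2))   # which index dominates
    ```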

  16. A compliant mechanism for inspecting extremely confined spaces

    Science.gov (United States)

    Mascareñas, David; Moreu, Fernando; Cantu, Precious; Shields, Daniel; Wadden, Jack; El Hadedy, Mohamed; Farrar, Charles

    2017-11-01

    We present a novel, compliant mechanism that provides the capability to navigate extremely confined spaces for the purpose of infrastructure inspection. Extremely confined spaces are commonly encountered during infrastructure inspection. Examples of such spaces can include pipes, conduits, and ventilation ducts. Often these infrastructure features go uninspected simply because there is no viable way to access their interior. In addition, it is not uncommon for extremely confined spaces to possess a maze-like architecture that must be selectively navigated in order to properly perform an inspection. Efforts by the imaging sensor community have resulted in the development of imaging sensors on the millimeter length scale. Due to their compact size, they are able to inspect many extremely confined spaces of interest, however, the means to deliver these sensors to the proper location to obtain the desired images are lacking. To address this problem, we draw inspiration from the field of endoscopic surgery. Specifically we consider the work that has already been done to create long flexible needles that are capable of being steered through the human body. These devices are typically referred to as ‘steerable needles.’ Steerable needle technology is not directly applicable to the problem of navigating maze-like arrangements of extremely confined spaces, but it does provide guidance on how this problem should be approached. Specifically, the super-elastic nitinol tubing material that allows steerable needles to operate is also appropriate for the problem of navigating maze-like arrangements of extremely confined spaces. Furthermore, the portion of the mechanism that enters the extremely confined space is completely mechanical in nature. The mechanical nature of the device is an advantage when the extremely confined space features environmental hazards such as radiation that could degrade an electromechanically operated mechanism. Here, we present a compliant mechanism

  17. Correlation dimension and phase space contraction via extreme value theory

    Science.gov (United States)

    Faranda, Davide; Vaienti, Sandro

    2018-04-01

    We show how to obtain theoretical and numerical estimates of the correlation dimension and phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension, and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which, in dimensions 1 and 2, is closely related to the positive Lyapunov exponent and, in higher dimensions, to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain as they imply just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Henon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
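
    A numerical sketch of the estimator described above: sample the observable -log(distance to a reference point) along a chaotic trajectory and read a local, correlation-like dimension off the inverse of the fitted scale parameter of the threshold exceedances. The Henon map, threshold quantile, and reference point are illustrative choices, not the paper's test cases.

    ```python
    # Local dimension from extreme value statistics of -log distances on the Henon map.
    import numpy as np

    def henon_orbit(n, a=1.4, b=0.3, burn=1000):
        x, y = 0.1, 0.1
        pts = np.empty((n, 2))
        for i in range(n + burn):
            x, y = 1 - a * x * x + y, b * x
            if i >= burn:
                pts[i - burn] = (x, y)
        return pts

    orbit = henon_orbit(200_000)
    ref = orbit[1234]                                       # a reference point on the attractor

    dist = np.linalg.norm(orbit - ref, axis=1)
    g = -np.log(dist[dist > 0])                             # the EVT observable

    u = np.quantile(g, 0.99)                                # high threshold (illustrative)
    excess = g[g > u] - u
    scale_hat = excess.mean()                               # exponential/GPD scale (shape ~ 0)
    print(f"estimated local dimension at reference point: {1.0 / scale_hat:.2f}")   # roughly 1.2 expected
    ```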

  18. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    International Nuclear Information System (INIS)

    Hoffstaetter, G.H.

    1994-12-01

    Analyzing the stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamical apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)

  19. Translation and cross-cultural adaptation of the lower extremity functional scale into a Brazilian Portuguese version and validation on patients with knee injuries.

    Science.gov (United States)

    Metsavaht, Leonardo; Leporace, Gustavo; Riberto, Marcelo; Sposito, Maria Matilde M; Del Castillo, Letícia N C; Oliveira, Liszt P; Batista, Luiz Alberto

    2012-11-01

    Clinical measurement. To translate and culturally adapt the Lower Extremity Functional Scale (LEFS) into a Brazilian Portuguese version, and to test the construct and content validity and reliability of this version in patients with knee injuries. There is no Brazilian Portuguese version of an instrument to assess the function of the lower extremity after orthopaedic injury. The translation of the original English version of the LEFS into a Brazilian Portuguese version was accomplished using standard guidelines and tested in 31 patients with knee injuries. Subsequently, 87 patients with a variety of knee disorders completed the Brazilian Portuguese LEFS, the Medical Outcomes Study 36-Item Short-Form Health Survey, the Western Ontario and McMaster Universities Osteoarthritis Index, the International Knee Documentation Committee Subjective Knee Evaluation Form, and a visual analog scale for pain. All patients were retested within 2 days to determine reliability of these measures. Validation was assessed by determining the level of association between the Brazilian Portuguese LEFS and the other outcome measures. Reliability was documented by calculating internal consistency, test-retest reliability, and standard error of measurement. The Brazilian Portuguese LEFS had a high level of association with the physical component of the Medical Outcomes Study 36-Item Short-Form Health Survey (r = 0.82), the Western Ontario and McMaster Universities Osteoarthritis Index (r = 0.87), the International Knee Documentation Committee Subjective Knee Evaluation Form (r = 0.82), and the pain visual analog scale (r = -0.60), all statistically significant. The internal consistency and test-retest reliability (intraclass correlation coefficient = 0.957) of the Brazilian Portuguese version of the LEFS were high. The standard error of measurement was low (3.6) and the agreement was considered high, demonstrated by the small differences between test and retest and the narrow limit of agreement, as observed in Bland-Altman and survival-agreement plots. The translation of the LEFS into a
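
    A hedged sketch of the test-retest statistics named in the abstract (correlation, Bland-Altman limits of agreement, standard error of measurement), computed on synthetic scores; the score distribution and error magnitudes below are invented.

    ```python
    # Test-retest reliability statistics on synthetic LEFS-like scores.
    import numpy as np

    rng = np.random.default_rng(7)
    true_score = rng.uniform(20, 80, size=87)               # LEFS ranges 0-80; 87 patients as in the study
    test = true_score + rng.normal(0, 3, size=87)
    retest = true_score + rng.normal(0, 3, size=87)

    r = np.corrcoef(test, retest)[0, 1]                     # test-retest correlation (stand-in for the ICC)
    diff = retest - test
    loa = diff.mean() + np.array([-1.96, 1.96]) * diff.std(ddof=1)   # Bland-Altman limits of agreement
    sem = test.std(ddof=1) * np.sqrt(1 - r)                 # standard error of measurement

    print(f"test-retest r = {r:.3f}")
    print(f"Bland-Altman: mean diff {diff.mean():.2f}, limits of agreement [{loa[0]:.2f}, {loa[1]:.2f}]")
    print(f"SEM ~ {sem:.2f} points")
    ```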

  20. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available
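
    One of the listed improvements, splitting large sequence files for downstream load balancing, can be sketched generically as below; this is not Rainbow's code, and the path and chunk size are placeholders.

    ```python
    # Split a FASTQ file into fixed-size chunks (4 lines per read) for load balancing.
    import gzip
    from itertools import islice

    def split_fastq(path, records_per_chunk=1_000_000, prefix="chunk"):
        """Write consecutive chunks of `records_per_chunk` reads and return their paths."""
        opener = gzip.open if path.endswith(".gz") else open
        chunk_paths = []
        with opener(path, "rt") as fh:
            i = 0
            while True:
                lines = list(islice(fh, records_per_chunk * 4))
                if not lines:
                    break
                out_path = f"{prefix}_{i:04d}.fastq"
                with open(out_path, "w") as out:
                    out.writelines(lines)
                chunk_paths.append(out_path)
                i += 1
        return chunk_paths

    # Example (placeholder path): split_fastq("sample_R1.fastq.gz", records_per_chunk=500_000)
    ```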