WorldWideScience

Sample records for supercomputers topics considered

  1. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state of the art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  2. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  3. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  4. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics [77], [79], [80], and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of the Forty-Fourth Meeting of the Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of the 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S.; Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of the 2010 Conference on Applied Research in Information Technology, sponsored by

  5. Considering a new domain for antimicrobial stewardship: Topical antibiotics in the open surgical wound.

    Science.gov (United States)

    Edmiston, Charles E; Leaper, David; Spencer, Maureen; Truitt, Karen; Litz Fauerbach, Loretta; Graham, Denise; Johnson, Helen Boehm

    2017-11-01

    The global push to combat the problem of antimicrobial resistance has led to the development of antimicrobial stewardship programs (ASPs), which were recently mandated by The Joint Commission and the Centers for Medicare and Medicaid Services. However, the use of topical antibiotics in the open surgical wound is often not monitored by these programs nor is it subject to any evidence-based standardization of care. Survey results indicate that the practice of using topical antibiotics intraoperatively, in both irrigation fluids and powders, is widespread. Given the risks inherent in their use and the lack of evidence supporting it, the practice should be monitored as a core part of ASPs, and alternative agents, such as antiseptics, should be considered. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  6. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term "supercomputing" have spread over the past ten years. The performances of the main computers installed so far at the Japan Atomic Energy Research Institute are compared. There are two methods of increasing computing speed using existing circuit elements: the parallel processor system and the vector processor system. The CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents divide into increasing the computing speed of existing simulation calculations and accelerating new technical development of atomic energy. Examples of supercomputing at the Japan Atomic Energy Research Institute are reported. (K.I.)

  7. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  8. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  9. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  10. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  11. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 physicists and astronomers attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration between numerical experimentalists working on supercomputing techniques. The various subjects of the presented papers on hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity are all stimulating. In fact, these numerical calculations have become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  12. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecasting, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described.

  13. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  14. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  15. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur...

  16. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  17. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
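The torus interconnect described above gives every node a fixed set of nearest neighbors because coordinates wrap around at the mesh edges. The following C sketch illustrates that property only; it is not the patent's implementation, and the 4x4x4 dimensions and the rank_of helper are hypothetical:

```c
/* Illustrative 3D torus addressing: map (x, y, z) coordinates to a
 * linear rank and find a node's six nearest neighbors with wraparound.
 * Dimensions and helper names are hypothetical, for exposition only. */
#include <stdio.h>

#define DX 4
#define DY 4
#define DZ 4

static int rank_of(int x, int y, int z)
{
    /* Wrap each coordinate so the mesh edges join into a torus. */
    x = (x + DX) % DX;
    y = (y + DY) % DY;
    z = (z + DZ) % DZ;
    return (z * DY + y) * DX + x;
}

int main(void)
{
    int x = 0, y = 3, z = 2; /* an arbitrary node */
    printf("node %d has neighbors %d %d %d %d %d %d\n",
           rank_of(x, y, z),
           rank_of(x - 1, y, z), rank_of(x + 1, y, z),
           rank_of(x, y - 1, z), rank_of(x, y + 1, z),
           rank_of(x, y, z - 1), rank_of(x, y, z + 1));
    return 0;
}
```

Because every link is nearest-neighbor, aggregate bandwidth grows with the machine while each node's router stays simple; the DMA engine mentioned in the abstract moves packets over such links without occupying the compute processors.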

  18. KfK seminar series on supercomputing and visualization from May to September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

During the period of May 1992 to September 1992, a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP)

  19. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

All papers of the two volumes are separately indexed in the database. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  20. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

All papers of the two volumes are separately indexed in the database. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  1. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  2. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  3. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  4. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, which was ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  5. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  6. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  7. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  8. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  9. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  10. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at the Titan HPC and Summit PowerPC systems at Oak Ridge Computin...

  11. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies, with the intent of either competing directly in the supercomputer arena or providing entry-level systems from which to graduate to supercomputers, are springing up everywhere. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that both from the hardware and software points of view they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  12. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  13. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

The utilization of supercomputers at the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research which use supercomputers frequently and the contents of their computation are outlined. What vectorizing means is briefly explained, and nuclear fusion, nuclear reactor physics, the hydrothermal safety of nuclear reactors, the parallelism inherent in atomic energy computations of fluids and others, the algorithm for vector treatment, and the speedup obtained by vectorizing are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation changed from criticality computation around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors and reactor safety assessment at present. Also, the method of using computers advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, from experimental analysis to numerical simulation, and so on. (K.I.)

  14. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  15. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  16. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  17. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  18. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  19. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem.

  20. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  1. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie...

  2. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
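As a rough illustration of the main-wrap idea described above (a hedged sketch, not the paper's actual code; the names app_main and run_task are invented here), an application's main() is renamed so a workflow layer can call it as a function many times inside one resource block:

```c
/* Sketch of wrapping an ordinary C application's entry point so a
 * workflow engine can invoke it repeatedly in-process, instead of
 * paying job-launch overhead for every small task. Names hypothetical. */
#include <stdio.h>

/* Originally the application's main(); renamed to be callable. */
static int app_main(int argc, char **argv)
{
    printf("processing input %s\n", argv[1]);
    return 0;
}

/* Thin wrapper the workflow layer (Swift, in the paper) would call
 * once per task. */
static int run_task(const char *input_file)
{
    char *argv[] = { "app", (char *)input_file, NULL };
    return app_main(2, argv);
}

int main(void)
{
    /* Stand-in for the engine's task loop: many small tasks, one job. */
    const char *tasks[] = { "a.dat", "b.dat", "c.dat" };
    for (int i = 0; i < 3; i++)
        run_task(tasks[i]);
    return 0;
}
```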

  3. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

Seismic tomography is a technique used for computing a velocity model of a geologic structure from first-arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on...
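The model-update step sketched in the abstract (compute a tomographic matrix linking model adjustments to travel-time residuals, then solve a regularized linear system) can be illustrated with a tiny C example. This is a hedged sketch under assumed toy sizes, minimizing ||Gm - r||^2 + lambda*||m||^2 by plain gradient iteration; production tomography codes use solvers such as LSQR on far larger, sparse systems:

```c
/* Toy regularized least-squares model update for travel-time tomography:
 * G relates slowness adjustments m (per cell) to residuals r (per ray).
 * Solve min ||G m - r||^2 + lambda ||m||^2 by gradient (Landweber)
 * iteration. All sizes and values are illustrative. */
#include <stdio.h>

#define NRAYS  4 /* residuals (rows of G)    */
#define NCELLS 3 /* model cells (columns of G) */

int main(void)
{
    double G[NRAYS][NCELLS] = {
        {1.0, 0.5, 0.0},
        {0.0, 1.0, 0.5},
        {0.5, 0.0, 1.0},
        {0.3, 0.3, 0.3},
    };
    double r[NRAYS]  = {0.12, -0.05, 0.08, 0.02}; /* seconds */
    double m[NCELLS] = {0.0};
    const double lambda = 0.1, step = 0.2;

    for (int it = 0; it < 200; it++) {
        double grad[NCELLS] = {0.0};
        /* grad = G^T (G m - r) + lambda m */
        for (int i = 0; i < NRAYS; i++) {
            double misfit = -r[i];
            for (int j = 0; j < NCELLS; j++)
                misfit += G[i][j] * m[j];
            for (int j = 0; j < NCELLS; j++)
                grad[j] += G[i][j] * misfit;
        }
        for (int j = 0; j < NCELLS; j++)
            m[j] -= step * (grad[j] + lambda * m[j]);
    }
    for (int j = 0; j < NCELLS; j++)
        printf("cell %d slowness adjustment: %+.4f\n", j, m[j]);
    return 0;
}
```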

  4. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  5. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  6. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  7. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potential of optical and neural technologies for developing future supercomputers.

  8. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

The ETA 10, from ETA Systems, is a Class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  9. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on the function of vector computation. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation function that supercomputers provide, problems regarding utilization, and the future prospects are explained. The adaptability of individual calculation codes to vector computation is largely dependent on the algorithm and program structure used in the codes. The speedup achieved by pipeline vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
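The dependence of vectorizability on algorithm and program structure, which the report identifies as the decisive factor, is easy to see in code. A minimal C sketch (illustrative, not from the report): the first loop has independent iterations and maps directly onto a pipelined vector unit, while the second carries a dependence from one iteration to the next and cannot be vectorized as written:

```c
/* Two loops with identical operation counts but opposite vectorization
 * behavior; the difference is purely in the dependence structure. */
#include <stddef.h>

/* Vectorizable: each iteration is independent (a SAXPY-like kernel). */
void scale_add(double *restrict y, const double *restrict x,
               double a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Not vectorizable as written: y[i] depends on y[i-1] (a recurrence),
 * so the pipeline must wait for each previous result. */
void prefix_sum(double *y, const double *x, size_t n)
{
    y[0] = x[0];
    for (size_t i = 1; i < n; i++)
        y[i] = y[i - 1] + x[i];
}
```

This structural difference is one plausible reason the reported speedup factors spread so widely across the 1.5 to 9.0 range.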

  10. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  11. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-01-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  12. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  13. FPS scientific computers and supercomputers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  14. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described, as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.)

  15. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within a complex-networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex networks simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure of complex networks simulation are discussed, including organization of distributed calculations, crawling data in social networks, and results visualization. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks, and epidemic spreading.

  16. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  17. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  18. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  19. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  20. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
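
    As a rough illustration of what STREAM measures, the sketch below times a numpy version of the triad kernel (a = b + s*c) and reports the implied memory bandwidth. It is a toy stand-in for illustration only, not the benchmark code run on SANAM.

        import time
        import numpy as np

        n = 50_000_000                # vectors large enough to defeat the caches
        b = np.random.rand(n)
        c = np.random.rand(n)
        s = 3.0

        t0 = time.perf_counter()
        a = b + s * c                 # the STREAM "triad" kernel
        dt = time.perf_counter() - t0

        # Triad moves three arrays of 8-byte doubles: read b, read c, write a.
        gbytes = 3 * n * 8 / 1e9
        print(f"triad: {gbytes / dt:.1f} GB/s sustained")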

  1. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  2. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  3. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. The node design integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while also supporting DMA functionality for parallel message-passing.

  4. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days as a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  5. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radio telescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw water at ~80-95 degrees C from aquifers of naturally permeable rock in the Perth sedimentary basin, lying between 2000 and 3000 meters depth. The geothermal water will be run through absorption chilling devices, which require only heat (as opposed to mechanical work) to produce a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  6. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the Fusion-io 40 GB parallel NAND Flash disk array. The Fusion system specs are as follows
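
    The level-set expansion benchmark is, at its core, a breadth-first traversal that grows one frontier (level set) at a time; on scale-free graphs the frontiers explode quickly, which is what stresses out-of-core storage. The in-memory sketch below shows just that core loop, leaving out the external-memory layer the LDRD work targeted.

        from collections import defaultdict

        def level_sets(edges, source):
            # Expand BFS level sets from a source vertex; returns a list of sets.
            adj = defaultdict(list)
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            visited, frontier, levels = {source}, {source}, []
            while frontier:
                levels.append(frontier)
                nxt = {w for u in frontier for w in adj[u]} - visited
                visited |= nxt
                frontier = nxt
            return levels

        edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
        print(level_sets(edges, 0))   # [{0}, {1, 2}, {3}, {4}]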

  7. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  8. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer with a performance of 50 million operations per second is suggested. Its realization would allow one to solve JINR data analysis problems for large spectrometers (in particular for the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS each. The processors are combined by means of standard VME buses. A MicroVAX II host computer organizes the operation of the system, and data input and output are realized via the MicroVAX II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, giving all JINR users access to the suggested system

  9. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards the exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe our experiences exploiting threading in a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  10. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using this metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
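
    Amdahl's law bounds the speedup on N processors by the sequential fraction s of the work: S(N) = 1 / (s + (1 - s)/N). The sketch below evaluates this for a hypothetical code; the sequential fraction is illustrative, not a measured value for the TaihuLight.

        def amdahl_speedup(serial_fraction, processors):
            # Amdahl's law: S(N) = 1 / (s + (1 - s) / N).
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

        s = 1e-6    # even a tiny sequential part dominates at 10-million-core scale
        for n in (1_000, 100_000, 10_000_000):
            print(f"{n:>10} cores -> speedup {amdahl_speedup(s, n):,.0f}")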

  11. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition, but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market, in the $500,000 to $1 million range, offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  12. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  13. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  14. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  15. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  16. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
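
    The throughput arithmetic quoted above (~16 injections per core-hour, ~2000 injections per target star, 16% of the target list in about 200 hours) is easy to check; in the sketch below the Kepler target count and the number of cores are hypothetical stand-ins, chosen only so the totals reproduce the abstract's figures.

        injections_per_core_hour = 16
        injections_per_star = 2000
        n_targets = 200_000 * 0.16          # 16% of ~200k Kepler target stars

        core_hours = n_targets * injections_per_star / injections_per_core_hour
        cores = 20_000                      # hypothetical slice of Pleiades
        print(f"{core_hours:.2e} core-hours -> "
              f"{core_hours / cores:.0f} hours on {cores} cores")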

  17. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
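
    The light-weight MPI wrapper idea - one MPI rank per independent single-threaded payload, so the batch system sees a single large job - can be sketched with mpi4py as below. The payload script is a hypothetical placeholder; the real PanDA pilot adds data staging, monitoring and error handling.

        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank runs one single-threaded payload on its own core.
        cmd = ["./run_payload.sh", f"--seed={rank}"]   # hypothetical payload
        ret = subprocess.call(cmd)

        # Gather exit codes on rank 0 so the wrapper can report overall success.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            print("payload exit codes:", codes)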

  18. Topics and topic prominence in two sign languages

    NARCIS (Netherlands)

    Kimmelman, V.

    2015-01-01

    In this paper we describe topic marking in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) and discuss whether these languages should be considered topic prominent. The formal markers of topics in RSL are sentence-initial position, a prosodic break following the topic, and

  19. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  20. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  1. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  2. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massively parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially under real-time constraints, can be challenging and requires the use of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  3. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
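
    One classic scheme with the properties named in the abstract - explicit, spatially local, and norm-preserving - staggers the real and imaginary parts of the wave function in time. The 1-D sketch below follows that idea; it is a generic illustration (and only conditionally stable in this simple form), not necessarily the authors' exact method.

        import numpy as np

        n, dx, dt = 400, 0.1, 0.002
        x = np.arange(n) * dx
        V = np.zeros(n)                                   # free particle

        psi = np.exp(-(x - 10) ** 2) * np.exp(5j * x)     # Gaussian wave packet
        R, I = psi.real.copy(), psi.imag.copy()

        def H(u):
            # Apply H = -(1/2) d2/dx2 + V with a local 3-point stencil.
            d2 = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
            return -0.5 * d2 + V * u

        for _ in range(1000):     # leapfrog the real and imaginary parts
            R += dt * H(I)        # dR/dt =  H I
            I -= dt * H(R)        # dI/dt = -H R

        print("norm:", (R ** 2 + I ** 2).sum() * dx)      # stays ~constant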

  4. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  5. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software for the automatic investigation of computational mathematics tasks with approximate data of different structures was designed. Applied software providing mathematical modeling of problems in construction, welding and filtration processes was implemented.

  6. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
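
    The locality and uniformity argument can be seen in a toy HPP-style lattice gas, where each cell holds four boolean velocity channels and the update rule is the same everywhere: head-on pairs collide, then every particle streams one cell. The sketch below is a minimal illustration of that rule set, not the machines or models used by the authors.

        import numpy as np

        rng = np.random.default_rng(0)
        grid = rng.random((4, 64, 64)) < 0.2    # channels E, W, N, S
        E, W, N, S = 0, 1, 2, 3
        total0 = grid.sum()

        for _ in range(100):
            # Collision: exactly-head-on pairs rotate 90 degrees.
            ew = grid[E] & grid[W] & ~grid[N] & ~grid[S]
            ns = grid[N] & grid[S] & ~grid[E] & ~grid[W]
            grid[E], grid[W] = (grid[E] & ~ew) | ns, (grid[W] & ~ew) | ns
            grid[N], grid[S] = (grid[N] & ~ns) | ew, (grid[S] & ~ns) | ew
            # Streaming: every particle moves one cell along its velocity.
            grid[E] = np.roll(grid[E], 1, axis=1)
            grid[W] = np.roll(grid[W], -1, axis=1)
            grid[N] = np.roll(grid[N], -1, axis=0)
            grid[S] = np.roll(grid[S], 1, axis=0)

        print("particles conserved:", grid.sum() == total0)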

  7. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  8. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
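
    A common way to recover the syntactic structure of log messages, in the spirit of the method described, is to mask the variable tokens and group lines by the remaining template. The sketch below shows that idea on fabricated example messages, not the actual logs studied in the report.

        import re
        from collections import defaultdict

        def template(line):
            # Mask hex and decimal tokens so lines collapse to their syntax.
            line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
            return re.sub(r"\d+", "<NUM>", line)

        logs = [
            "node 17 temperature 81C",
            "node 42 temperature 79C",
            "ECC error at 0xdeadbeef on node 17",
            "ECC error at 0x1f2e3d4c on node 9",
        ]
        groups = defaultdict(list)
        for line in logs:
            groups[template(line)].append(line)

        for tmpl, members in groups.items():
            print(f"{len(members)}x  {tmpl}")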

  9. Minarchy Considered

    Directory of Open Access Journals (Sweden)

    Richard A Garner

    2009-09-01

    Full Text Available Whilst some defenders of the minimal, limited state or government hold that the state is “a necessary evil,” others would consider that this claim that the state is evil concedes too much ground to anarchists. In this article I intend to discuss the views of some who believe that government is a good thing, and their arguments for supporting this position. My main conclusions will be that, in each case, the proponents of a minimal state, or “minarchy,” fail to justify as much as what they call government, and so fail to oppose anarchism, or absences of what they call government.

  10. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
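
    The staggered conjugate gradient named above is, at its core, the standard CG iteration for a symmetric positive-definite system. The sketch below shows that kernel on a small dense matrix purely for illustration; the production solvers apply it to the lattice Dirac operator through QPhiX or QUDA.

        import numpy as np

        def cg(A, b, tol=1e-10, max_iter=1000):
            # Conjugate gradient for a symmetric positive-definite matrix A.
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        M = np.random.rand(50, 50)
        A = M @ M.T + 50 * np.eye(50)    # make the system SPD
        b = np.random.rand(50)
        x = cg(A, b)
        print("residual:", np.linalg.norm(A @ x - b))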

  11. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  12. Testosterone Topical

    Science.gov (United States)

    ... not apply any testosterone topical products to your penis or scrotum or to skin that has sores, ... are severe or do not go away: breast enlargement and/or pain decreased sexual desire acne depression ...

  13. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  14. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  15. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  16. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.
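
    The splitting-with-respect-to-physical-processes principle highlighted in the review can be illustrated on a 1-D advection-diffusion equation: each time step first advects, then diffuses, with each sub-process handled by its own simple scheme. The sketch below is a generic textbook illustration, not one of the production algorithms the paper reviews.

        import numpy as np

        # du/dt + a du/dx = nu d2u/dx2, split into two physical processes.
        n = 200
        dx = 1.0 / n
        a, nu = 1.0, 1e-3
        dt = 0.4 * min(dx / a, dx * dx / (2 * nu))   # respect both limits

        x = np.arange(n) * dx
        u = np.exp(-200 * (x - 0.3) ** 2)            # initial pulse

        for _ in range(300):
            # Process 1: advection (first-order upwind, periodic boundaries).
            u = u - a * dt / dx * (u - np.roll(u, 1))
            # Process 2: diffusion (explicit central differences).
            u = u + nu * dt / dx ** 2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

        print("mass conserved:", u.sum() * dx)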

  17. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  18. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860-microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
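
    One family of generators with very long periods that maps well onto simple integer hardware is the additive lagged-Fibonacci generator. The sketch below uses the common (55, 24) lag pair purely as an illustration; the paper's i860 generator and its exact lags are not reproduced here.

        class LaggedFibonacci:
            # x[n] = (x[n-55] + x[n-24]) mod 2**32, period > 2**55.
            def __init__(self, seed=12345):
                s, self.state = seed, []
                for _ in range(55):        # fill the lag table with an LCG
                    s = (1664525 * s + 1013904223) % 2**32
                    self.state.append(s)
                self.i = 0

            def next_u32(self):
                j = (self.i - 24) % 55     # position of x[n-24]
                self.state[self.i] = (self.state[self.i] + self.state[j]) % 2**32
                out = self.state[self.i]
                self.i = (self.i + 1) % 55
                return out

        rng = LaggedFibonacci()
        print([rng.next_u32() / 2**32 for _ in range(3)])   # floats in [0, 1)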

  19. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, is reported. This model was implemented on a QUADRICS-Q1 supercomputer. First, a description of the MRBT model is given: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidents. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model on it is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered
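
    The Gaussian-like solution mentioned above can be written down directly for an instantaneous point release of mass Q carried by a wind of speed u along x. The sketch below evaluates that closed form; the dispersion coefficients are hypothetical constants, whereas a real model grows them with travel time and atmospheric stability.

        import numpy as np

        def puff_concentration(x, y, z, t, Q=1.0, u=5.0,
                               sx=20.0, sy=20.0, sz=10.0):
            # Gaussian puff: concentration at (x, y, z), a time t after release.
            norm = Q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
            return norm * np.exp(-((x - u * t) ** 2 / (2 * sx ** 2)
                                   + y ** 2 / (2 * sy ** 2)
                                   + z ** 2 / (2 * sz ** 2)))

        # Concentration along the plume axis 60 s after a 1 kg release.
        for x in (200.0, 300.0, 400.0):
            print(x, puff_concentration(x, 0.0, 0.0, 60.0))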

  20. Bimatoprost Topical

    Science.gov (United States)

    ... not use a cotton swab or any other brush or applicator to apply topical bimatoprost.To use the solution, follow these steps: Wash your hands and face thoroughly with soap and water. Be sure that all makeup is removed. Do not let the tip of ...

  1. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  2. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small scale system. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, previously used only to compute display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided the Software Development Kit (SDK) for the programming environment for NVIDIA's graphics cards, which is called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.
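
    The kind of embarrassingly parallel Monte Carlo workload that maps well onto a GPU can be illustrated with a pi estimate: every sample is an independent trial, so the work is expressed as bulk array operations, exactly the form that ports naturally to CUDA kernels. The numpy sketch below runs on the CPU and does not use the CUDA SDK itself.

        import numpy as np

        def mc_pi(n_samples, seed=0):
            # Estimate pi by sampling points uniformly in the unit square.
            rng = np.random.default_rng(seed)
            x = rng.random(n_samples)
            y = rng.random(n_samples)
            inside = (x * x + y * y) <= 1.0   # one independent trial per element
            return 4.0 * inside.mean()

        print(mc_pi(10_000_000))              # ~3.1416, error ~ 1/sqrt(N)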

  3. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small scale system. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, previously used only to compute display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided the Software Development Kit (SDK) for the programming environment for NVIDIA's graphics cards, which is called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation

  4. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms

  5. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems

  6. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128^3-grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation, with 1024^3 grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  7. Topical anesthesia

    Directory of Open Access Journals (Sweden)

    Mritunjay Kumar

    2015-01-01

    Full Text Available Topical anesthetics are being widely used in numerous medical and surgical sub-specialties such as anesthesia, ophthalmology, otorhinolaryngology, dentistry, urology, and aesthetic surgery. They cause superficial loss of pain sensation after direct application. Their delivery and effectiveness can be enhanced by using free bases; by increasing the drug concentration, lowering the melting point; by using physical and chemical permeation enhancers and lipid delivery vesicles. Various topical anesthetic agents available for use are eutectic mixture of local anesthetics, ELA-max, lidocaine, epinephrine, tetracaine, bupivanor, 4% tetracaine, benzocaine, proparacaine, Betacaine-LA, topicaine, lidoderm, S-caine patch™ and local anesthetic peel. While using them, careful attention must be paid to their pharmacology, area and duration of application, age and weight of the patients and possible side-effects.

  8. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, slows the pace of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which allows us to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer's behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.

  9. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.

  10. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with a low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized, and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
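
    The memory-access pattern that makes a PIC code bandwidth-bound shows up even in a toy 1-D charge-deposition kernel: each particle scatters onto grid cells chosen by its position, which is random access in the inner loop. The sketch below shows only that kernel; a real PIC step adds the field solve, gather and particle push, and this is not VPIC's implementation.

        import numpy as np

        def deposit_charge(positions, n_cells, q=1.0):
            # Linear-weighting (CIC) charge deposition on a periodic 1-D grid.
            rho = np.zeros(n_cells)
            cell = np.floor(positions).astype(int) % n_cells
            frac = positions - np.floor(positions)
            # Scatter: each particle touches two cells picked by its position,
            # the random-access pattern that stresses memory bandwidth at scale.
            np.add.at(rho, cell, q * (1.0 - frac))
            np.add.at(rho, (cell + 1) % n_cells, q * frac)
            return rho

        rng = np.random.default_rng(1)
        pos = rng.random(1_000_000) * 64.0    # a million particles, 64 cells
        rho = deposit_charge(pos, 64)
        print("total charge:", rho.sum())     # equals the particle count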

  11. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on the problem type are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are then explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  12. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...
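
    A minimal sketch of a single Morris-Lecar neuron, the cell model used in the book's worked network examples. The parameter values below are common textbook choices, offered as assumptions rather than the book's actual settings; a network version would couple many such units.

        # Forward-Euler integration of one Morris-Lecar neuron
        # (V: membrane potential, w: K+ channel gating variable).
        import numpy as np

        C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0
        VCa, VK, VL = 120.0, -84.0, -60.0
        V1, V2, V3, V4, phi, I = -1.2, 18.0, 2.0, 30.0, 0.04, 90.0

        def step(V, w, dt=0.05):
            m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
            w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
            tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
            dV = (I - gCa * m_inf * (V - VCa) - gK * w * (V - VK)
                  - gL * (V - VL)) / C
            dw = phi * (w_inf - w) / tau_w
            return V + dt * dV, w + dt * dw

        V, w, trace = -60.0, 0.0, []
        for _ in range(20_000):
            V, w = step(V, w)
            trace.append(V)        # spiking voltage trace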

  13. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations that stem from structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmospheric Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.
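
    A minimal illustration of one of the optimization strategies named above, loop fusion: merging two passes over the same data removes one full round trip through memory and raises arithmetic intensity. The arrays and operations are illustrative, not kernels from SAM, and the effect is shown schematically in plain Python.

        n = 1_000_000
        x, y, tmp = [1.0] * n, [0.0] * n, [0.0] * n

        # Fission: two loops over the data; 'tmp' is written out, then read back.
        def fissioned():
            for i in range(n):
                tmp[i] = 2.0 * x[i]
            for i in range(n):
                y[i] = tmp[i] + 1.0

        # Fusion: one loop computes the same result with no intermediate array
        # traffic, i.e. more arithmetic per element loaded from memory.
        def fused():
            for i in range(n):
                y[i] = 2.0 * x[i] + 1.0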

  14. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor designs are often intuition-driven processes where designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, complexity in each discipline increases, and implementation of the chaining/coupling capabilities required by a supercomputing optimization process is often postponed to a later step, so the task gets increasingly challenging. In the context of renewed interest in reactor designs, first-realization projects are often run in parallel with advanced design studies, although they depend heavily on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features with the accuracy of the ongoing design methods. This should be possible within reasonable simulation time and without requiring advanced computer skills at the project-management level. These tools should also be ready to easily cope with modeling progress in each discipline through the project's lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the Data Analysis Framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks, ...) and optimization techniques (genetic algorithms). Database management and visualization are also made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermo-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics will be presented. The flexibility of the URANIE tool will also be illustrated with the presentation of several approaches to improve Pareto front quality. (author)
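
    A minimal sketch of one of the sampling techniques named above, Latin Hypercube Sampling (LHS): each dimension is split into n equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension. This is the generic textbook construction, not URANIE's implementation.

        import numpy as np

        def lhs(n_samples, n_dims, rng=np.random.default_rng(0)):
            # One point per stratum in each dimension, strata shuffled
            # independently so dimensions are decorrelated.
            u = rng.random((n_samples, n_dims))            # jitter within strata
            strata = np.arange(n_samples)[:, None] + u     # stratum index + jitter
            for d in range(n_dims):
                rng.shuffle(strata[:, d])
            return strata / n_samples                      # scale to [0, 1)

        samples = lhs(10, 3)    # 10 design points in a 3-D parameter space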

  15. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integrating the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and for local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
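
    A minimal sketch of the lightweight-MPI-wrapper idea: MPI is used only to place one serial payload per rank inside a single batch job, with no communication beyond a final barrier. The payload command and file-naming scheme are hypothetical, for illustration only.

        # Run with: mpiexec -n <ranks> python wrapper.py
        # Each rank launches an independent serial payload; MPI provides
        # placement across Titan-style multi-core nodes, not communication.
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Hypothetical per-rank input/output naming scheme.
        cmd = ["./atlas_payload",
               f"--input=events.{rank:05d}", f"--output=out.{rank:05d}"]
        ret = subprocess.call(cmd)

        comm.Barrier()            # wait for all payloads before the job exits
        if rank == 0:
            print("all ranks finished")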

  16. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  17. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed at research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
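
    A minimal serial sketch of the kind of sparse iterative scheme with preconditioner referred to above: conjugate gradients with a Jacobi (diagonal) preconditioner. This is the generic textbook algorithm, not the project's software; in the distributed-memory setting the matrix-vector product and dot products would be split across subdomains.

        import numpy as np

        def pcg(A, b, tol=1e-8, max_iter=500):
            # Preconditioned conjugate gradients; M^{-1} = 1/diag(A) (Jacobi).
            Minv = 1.0 / np.diag(A)
            x = np.zeros_like(b)
            r = b - A @ x
            z = Minv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = Minv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Example: symmetric positive-definite 1-D Laplacian-like system.
        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x = pcg(A, np.ones(n))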

  18. Topics in quantum field theory

    NARCIS (Netherlands)

    Dams, C.J.F.

    2006-01-01

    In this PhD thesis some topics in quantum field theory are considered. The first chapter gives a background to these topics. The second chapter discusses renormalization. In particular it is shown how loop calculations can be performed when using the axial gauge fixing. Fermion creation and

  19. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to run artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  20. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  1. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including the opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  2. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-based production-rule systems for analysis are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP supercomputer. The fault-tree and event-tree analysis methodologies from systems analysis are used for problem representation and are coupled to the rule-based system paradigm from knowledge engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-rule analysis system that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an induction-deduction-oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations, in nuclear reactor safety analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions, using coupled symbolic and procedural programming, is discussed.
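
    A minimal sketch of the forward-chaining half of such an inference engine: rules are antecedent-consequent pairs, and facts are derived until a fixed point is reached. The example facts and rules are illustrative inventions, not drawn from HAL-1986 (which is written in PSL, not Python).

        # Tiny forward-chaining engine: apply rules until no new facts appear.
        rules = [
            ({"pump_failed"}, "coolant_flow_low"),
            ({"coolant_flow_low", "power_high"}, "core_temp_rising"),
            ({"core_temp_rising"}, "initiate_scram"),
        ]

        def forward_chain(facts, rules):
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for antecedent, consequent in rules:
                    if antecedent <= facts and consequent not in facts:
                        facts.add(consequent)
                        changed = True
            return facts

        print(forward_chain({"pump_failed", "power_high"}, rules))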

  3. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  4. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when running on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run in a distributed computing environment powered by PanDA. To run the pipeline we split input files into chunks, which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
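
    A minimal sketch of the split-process-merge pattern described above, assuming FASTQ input (4 lines per read); the chunk size, naming scheme and merge step are illustrative assumptions, not the actual PALEOMIX adaptation.

        from pathlib import Path

        def split_fastq(path, reads_per_chunk=1_000_000):
            """Split a FASTQ file (4 lines per read) into independent chunks."""
            chunks, buf = [], []
            with open(path) as f:
                for i, line in enumerate(f, start=1):
                    buf.append(line)
                    if i % (4 * reads_per_chunk) == 0:
                        chunks.append(flush_chunk(path, buf, len(chunks)))
                        buf = []
            if buf:
                chunks.append(flush_chunk(path, buf, len(chunks)))
            return chunks      # each chunk becomes one PanDA job's input

        def flush_chunk(path, lines, n):
            out = Path(f"{path}.chunk{n:04d}")
            out.write_text("".join(lines))
            return out

        # After all per-chunk jobs finish, their outputs are merged
        # (e.g. concatenated in chunk order) into the final result.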

  5. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, to study this temporal network behavior we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that uses data from the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can not only help improve the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  6. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large-scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.

  7. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  8. Topics in Nonlinear Dynamics

    DEFF Research Database (Denmark)

    Mosekilde, Erik

    Through a significant number of detailed and realistic examples this book illustrates how the insights gained over the past couple of decades in the fields of nonlinear dynamics and chaos theory can be applied in practice. Among the topics considered are microbiological reaction systems, ecological food-web systems, nephron pressure and flow regulation, pulsatile secretion of hormones, thermostatically controlled radiator systems, post-stall maneuvering of aircraft, transfer electron devices for microwave generation, economic long waves, human decision making behavior, and pattern formation in chemical reaction-diffusion systems.

  9. Credibility improves topical blog post retrieval

    NARCIS (Netherlands)

    Weerkamp, W.; de Rijke, M.

    2008-01-01

    Topical blog post retrieval is the task of ranking blog posts with respect to their relevance for a given topic. To improve topical blog post retrieval we incorporate textual credibility indicators in the retrieval process. We consider two groups of indicators: post level (determined using

  10. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  11. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)
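
    A minimal sketch of the real-time synchronization issue mentioned above: each simulated time step is paced against the wall clock, and a step that computes faster than real time sleeps for the remainder. This is the generic pacing loop under stated assumptions, not the paper's implementation.

        import time

        DT = 0.1    # simulated seconds per step; compute must fit in this budget

        def advance(state, dt):
            # stand-in for one physics step of the plant model
            return state

        state, t_wall = {}, time.monotonic()
        for step in range(1000):
            state = advance(state, DT)
            t_wall += DT
            slack = t_wall - time.monotonic()
            if slack > 0:
                time.sleep(slack)     # finished early: wait out the remainder
            # slack < 0 means the step overran real time and the simulation
            # has fallen behind the plant it is meant to mirror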

  12. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  13. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
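
    A minimal sketch of the modeling idea under stated assumptions: predicted runtime combines compute time, a memory term in which cores on a node contend for shared bandwidth, and a parameterized latency/bandwidth communication term. The functional form and constants are illustrative, not the paper's calibrated model.

        def predicted_time(flops, bytes_moved, msgs, msg_bytes, cores_per_node,
                           peak_flops=8e9,      # per-core flop rate (assumed)
                           node_bw=25e9,        # shared memory bandwidth per node
                           latency=2e-6, net_bw=1e9):
            compute = flops / peak_flops
            # Contention: cores on a node share node_bw, so effective per-core
            # bandwidth shrinks as cores_per_node grows.
            memory = bytes_moved / (node_bw / cores_per_node)
            comm = msgs * latency + msg_bytes / net_bw   # latency/bandwidth model
            return compute + memory + comm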

  14. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  15. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community

  16. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  17. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back-Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided for three different machines with different numbers of processors, for two network examples. A sample source code is given
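
    A minimal sketch of backpropagation for a one-hidden-layer perceptron, the algorithm the library parallelizes; on a SIMD machine like Quadrics, the matrix products below are what would be distributed. This is the generic serial algorithm, not the library's code, and the layer sizes are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out, lr = 4, 8, 2, 0.1
        W1 = rng.normal(size=(n_in, n_hid))
        W2 = rng.normal(size=(n_hid, n_out))

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train_step(x, target):
            global W1, W2
            h = sigmoid(x @ W1)                 # forward pass
            y = sigmoid(h @ W2)
            # backward pass: propagate error deltas from output to hidden layer
            d_out = (y - target) * y * (1 - y)
            d_hid = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, d_out)       # gradient-descent updates
            W1 -= lr * np.outer(x, d_hid)
            return 0.5 * np.sum((y - target) ** 2)

        x, t = rng.random(n_in), np.array([1.0, 0.0])
        for _ in range(100):
            loss = train_step(x, t)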

  18. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as calculated data have grown with improved supercomputer performance, both reduced visualization processing time and efficient use of the JAEA network are required. As a solution, we introduced a remote visualization system which is able to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of the visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also given. (author)

  19. Neutron spectroscopy, nuclear structure, related topics. Abstracts

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.

    1996-01-01

    Neutron spectroscopy, nuclear structure and related topics are considered. P,T-breaking, neutron beta decay, neutron radiative capture and neutron polarizability are discussed. Reactions with fast neutrons, methodical aspects and low-energy fission are considered as well.

  20. Topical report review status

    International Nuclear Information System (INIS)

    1997-08-01

    This report provides industry with procedures for submitting topical reports, guidance on how the U.S. Nuclear Regulatory Commission (NRC) processes and responds to topical report submittals, and an accounting, with review schedules, of all topical reports currently accepted for review by the NRC. This report will be published annually. Each sponsoring organization with one or more topical reports accepted for review copies

  1. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful for enabling faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so it is expected that very fast parallel computers will become more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we target very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only in the coastal regions. To balance the computational load across CPUs in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of that layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
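
    A minimal sketch of the load-balancing rule described above: CPUs are divided among nested grid layers in proportion to each layer's grid-point count, after which each layer would be decomposed in 1-D among its CPUs. The largest-remainder rounding is an illustrative choice, not necessarily the authors'; it assumes at least as many CPUs as layers.

        def allocate_cpus(grid_points, total_cpus):
            """Proportional allocation with largest-remainder rounding."""
            total = sum(grid_points)
            shares = [g * total_cpus / total for g in grid_points]
            alloc = [int(s) for s in shares]
            for _ in range(total_cpus - sum(alloc)):    # hand out the remainder
                i = max(range(len(shares)), key=lambda k: shares[k] - alloc[k])
                alloc[i] += 1
            return alloc

        # e.g. 3 nested layers, with the finest layer dominating the work
        print(allocate_cpus([4_000_000, 1_000_000, 250_000], 64))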

  2. Evaluating topic models with stability

    CSIR Research Space (South Africa)

    De Waal, A

    2008-11-01

    Full Text Available Topic models are unsupervised techniques that extract likely topics from text corpora, by creating probabilistic word-topic and topic-document associations. Evaluation of topic models is a challenge because (a) topic models are often employed...

  3. Topics in supersymmetric theories

    International Nuclear Information System (INIS)

    Nemeschansky, D.D.

    1984-01-01

    This thesis discusses four different topics in supersymmetric theories. In the first part models in which supersymmetry is broken by the Fayet-Iliopoulos mechanism are considered. The possibility that scalar quark and lepton masses might arise radiatively in such theories is explored. In the second part supersymmetric grand unified models with a sliding singlet are considered. The author reviews the argument that the sliding singlet does not work in models with large supersymmetry breaking. Then he considers the possibility of using a sliding singlet with low energy supersymmetry breaking. The third part of the thesis deals with the entropy problem of supersymmetric theories. Most supersymmetric models possess a decoupled particle with mass of order 100 GeV which is copiously produced in the early universe and whose decay produces huge amounts of entropy. The author shows how this problem can be avoided in theories in which the hidden sector contains several light fields. In the fourth part effective Lagrangians for supersymmetric theories are studied. The anomalous pion interaction for supersymmetric theories is written down. General properties of this term are studied both on compact and non-compact manifolds

  4. Topics in elementary particle physics

    International Nuclear Information System (INIS)

    Dugan, M.J.

    1985-01-01

    Topics in elementary particle physics are discussed. Models with N = 2 supersymmetry are constructed. The CP violation properties of a class of N = 1 supergravity models are analyzed. The structure of a composite Higgs model is investigated. The implications of a 17 keV neutrino are considered

  5. Women's Health Topics

    Science.gov (United States)

    ... Information by Audience: For Women. Women's Health Topics. National Women's Health Week, May 13-19, 2018 ...

  6. Regulatory Information By Topic

    Science.gov (United States)

    EPA develops and enforces regulations that span many environmental topics, from acid rain reduction to wetlands restoration. Each topic listed below may include related laws and regulations, compliance and enforcement information, and policy guidance

  7. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputer's batch queues and for local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA on supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputer's batch queues and for local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA on supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  9. Freshman Health Topics

    Science.gov (United States)

    Hovde, Karen

    2011-01-01

    This article examines a cluster of health topics that are frequently selected by students in lower division classes. Topics address issues relating to addictive substances, including alcohol and tobacco, eating disorders, obesity, and dieting. Analysis of the topics examines their interrelationships and organization in the reference literature.…

  10. Syntacticized topics in Kurmuk

    DEFF Research Database (Denmark)

    Andersen, Torben

    2015-01-01

    This article argues that Kurmuk, a little-described Western Nilotic language, is characterized by a syntacticized topic whose grammatical relation is variable. In this language, declarative clauses have as topic an obligatory preverbal NP which is either a subject, an object or an adjunct....... The grammatical relation of the topic is expressed by a voice-like inflection of the verb, here called orientation. While subject-orientation is morphologically unmarked, object-oriented and adjunct-oriented verbs are marked by a subject suffix or by a suffix indicating that the topic is not subject, and adjunct......-orientation differs from object-orientation by a marked tone pattern. Topic choice largely reflects information structure by indicating topic continuity. The topic also plays a crucial role in relative clauses and in clauses with contrastive constituent focus, in that objects and adjuncts can only be relativized...

  11. PREFACE: CEWQO Topical Issue CEWQO Topical Issue

    Science.gov (United States)

    Bozic, Mirjana; Man'ko, Margarita

    2009-09-01

    This topical issue of Physica Scripta collects selected peer-reviewed contributions based on invited and contributed talks and posters presented at the 15th Central European Workshop on Quantum Optics (CEWQO), which took place in Belgrade 29 May-3 June 2008 (http://cewqo08.phy.bg.ac.yu). On behalf of the whole community of the workshop, we thank the referees for their careful reading and useful suggestions which helped to improve all of the submitted papers. A brief description of CEWQO The Central European Workshop on Quantum Optics is a series of conferences started informally in Budapest in 1992. Sometimes small events transform into important conferences, as in the case of CEWQO. Professor Jozsef Janszky, from the Research Institute of Solid State Physics and Optics, is the founder of this series. Margarita Man'ko obtained the following information from Jozsef Janszky during her visit to Budapest, within the framework of cooperation between the Russian and Hungarian Academies of Sciences in 2005. He organized a small workshop on quantum optics in Budapest in 1992 with John Klauder as a main speaker. Then, bearing in mind that a year before Janszky himself was invited by Vladimir Buzek to give a seminar on the same topic in Bratislava, he decided to assign the name 'Central European Workshop on Quantum Optics', considering the seminar in Bratislava to be the first workshop and the one in Budapest the second. The third formal workshop took place in Bratislava in 1993, organized by Vladimir Buzek, then in 1994 (Budapest, by Jozsef Janszky), 1995 and 1996 (Budmerice, Slovakia, by Vladimir Buzek), 1997 (Prague, by Igor Jex), 1999 (Olomouc, Czech Republic, by Zdenek Hradil), 2000 (Balatonfüred, Hungary, by Jozsef Janszky), 2001 (Prague, by Igor Jex), 2002 (Szeged, Hungary, by Mihaly Benedict), 2003 (Rostock, Germany, by Werner Vogel and

  12. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  13. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  14. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer, hosted at the Instituto de Fisica de Cantabria (IFCA), entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order-of-magnitude reduction of the waiting time, is presented.

  15. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
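
    A minimal sketch of one common low-cost load-balancing strategy, greedy longest-task-first assignment of documents to workers using estimated costs (here, text length). This is a generic heuristic offered for illustration; paraBTM's actual strategy may differ.

        import heapq

        def balance(docs, n_workers):
            """Assign docs to workers, longest (by estimated cost) first."""
            heap = [(0, w) for w in range(n_workers)]   # (current load, worker)
            heapq.heapify(heap)
            assignment = {w: [] for w in range(n_workers)}
            for doc in sorted(docs, key=len, reverse=True):
                load, w = heapq.heappop(heap)           # least-loaded worker
                assignment[w].append(doc)
                heapq.heappush(heap, (load + len(doc), w))
            return assignment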

  16. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented, followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented; they reveal the details of the losses for a single operation. We then analyze the global performance of a whole supercomputer by identifying the reduction factors that bring the theoretical peak performance down to the much lower real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures, in a snapshot of January 1991, is briefly mentioned. Finally, some remarks on a user-friendly supercomputer architecture are made. (orig.)
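
    The vector triad mentioned above is a standard micro-benchmark of the form a(i) = b(i) + c(i) * d(i), whose sustained rate exposes memory-bandwidth losses relative to peak. A minimal timing sketch in Python/NumPy follows; the peak value is an illustrative assumption to be replaced with the actual peak of the machine under test.

    ```python
    # Minimal vector-triad micro-benchmark, a[i] = b[i] + c[i]*d[i], in the
    # spirit of the micro-measurements described above. The 'peak' figure is
    # an illustrative assumption; substitute the peak rate of the machine
    # under test.
    import time
    import numpy as np

    N = 10_000_000
    b, c, d = (np.random.rand(N) for _ in range(3))
    a = np.empty(N)

    t0 = time.perf_counter()
    np.multiply(c, d, out=a)   # a = c * d
    np.add(b, a, out=a)        # a = b + c * d  -> two flops per element
    elapsed = time.perf_counter() - t0

    sustained = 2 * N / elapsed            # flop/s actually achieved
    peak = 100e9                           # assumed peak flop/s (illustrative)
    print(f"sustained: {sustained / 1e9:.2f} Gflop/s "
          f"({100 * sustained / peak:.1f}% of assumed peak)")
    ```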

  17. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  18. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created the basis for a new research area, Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing complex socio-economic systems. In our view, the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less important, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that the growth of computer power has made it possible to describe the behavior of the many separate fragments of a complex system, as socio-economic systems are. The article also discusses the experience of foreign scientists and practitioners in running AFM on supercomputers, presents the example of an AFM developed at CEMI RAS, and analyzes the stages and methods of efficiently mapping the calculating kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation to forecast the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the article.

  19. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Response curves were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters
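
    As a concrete illustration of the finite-difference approach described above, the sketch below advances a two-dimensional transient heat-conduction grid with an explicit (FTCS) update. The geometry, soil properties, and heating term are simplified placeholder assumptions, not the paper's electrode model.

    ```python
    # Toy explicit finite-difference (FTCS) update for transient heat
    # conduction in soil, stepped at incremental points in time. Geometry,
    # material properties, and the heating term are simplified placeholder
    # assumptions, not the paper's electrode model.
    import numpy as np

    nx = ny = 101
    dx = 1.0                      # grid spacing (m)
    alpha = 3.6e-3                # thermal diffusivity (m^2/h), assumed value
    dt = 0.2 * dx * dx / alpha    # time step within the FTCS stability limit
    T = np.full((nx, ny), 10.0)   # initial soil temperature (deg C)

    for step in range(1000):
        lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
               - 4.0 * T[1:-1, 1:-1]) / dx**2
        T[1:-1, 1:-1] += alpha * dt * lap
        T[45:56, 45:56] += 0.05   # crude constant heat injection near the electrode
        # hold the outer boundary at the undisturbed soil temperature
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 10.0

    print("peak soil temperature after 1000 steps:", round(float(T.max()), 2))
    ```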

  20. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  1. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
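
    To make the sorted k-mer lists mentioned above concrete, the sketch below builds such a list per sequence and intersects two lists in a single merge pass to find shared seed matches. It is an illustration of the data structure only, not the progressiveMauve or Blue Gene/P implementation.

    ```python
    # Sketch of the sorted k-mer list data structure mentioned above: build one
    # list per sequence, then find shared seed matches between two genomes with
    # a single merge pass. An illustration only, not the progressiveMauve or
    # Blue Gene/P implementation.
    def sorted_kmers(seq, k):
        """All (kmer, position) pairs of seq, sorted lexicographically by kmer."""
        return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

    def shared_kmers(list_a, list_b):
        """Intersect two sorted k-mer lists in one merge pass."""
        out, i, j = [], 0, 0
        while i < len(list_a) and j < len(list_b):
            ka, kb = list_a[i][0], list_b[j][0]
            if ka == kb:
                out.append((ka, list_a[i][1], list_b[j][1]))
                i += 1
                j += 1
            elif ka < kb:
                i += 1
            else:
                j += 1
        return out

    a = sorted_kmers("ACGTACGGTC", 4)
    b = sorted_kmers("TTACGTACGA", 4)
    print(shared_kmers(a, b))   # common 4-mers with their positions in each sequence
    ```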

  2. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  3. Topical report review status

    International Nuclear Information System (INIS)

    1982-08-01

    A Topical Report Review Status is scheduled to be published semi-annually. The primary purpose of this document is to provide periodic progress reports of on-going topical report reviews, to identify those topical reports for which the Nuclear Regulatory Commission (NRC) staff review has been completed and, to the extent practicable, to provide NRC management with sufficient information regarding the conduct of the topical report program to permit taking whatever actions deemed necessary or appropriate. This document is also intended to be a source of information to NRC Licensing Project Managers and other NRC personnel regarding the status of topical reports which may be referenced in applications for which they have responsibility. This status report is published primarily for internal NRC use in managing the topical report program, but is also used by NRC to advise the industry of report review status

  4. Topical report review status

    International Nuclear Information System (INIS)

    1983-01-01

    A Topical Report Review Status is scheduled to be published semi-annually. The primary purpose of this document is to provide periodic progress reports of on-going topical report reviews, to identify those topical reports for which the Nuclear Regulatory Commission (NRC) staff review has been completed and, to the extent practicable, to provide NRC management with sufficient information regarding the conduct of the topical report program to permit taking whatever actions deemed necessary or appropriate. This document is also intended to be a source of information to NRC Licensing Project Managers and other NRC personnel regarding the status of topical reports which may be referenced in applications for which they have responsibility. This status report is published primarily for internal NRC use in managing the topical report program, but is also used by NRC to advise the industry of report review status

  5. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
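
    The in-situ coupling pattern described above can be reduced to a simple control-flow idea: the solver invokes an analysis callback on its in-memory data each timestep instead of writing every step to disk. A minimal schematic follows; the stand-in solver and the statistics computed are illustrative assumptions.

    ```python
    # Schematic of in-situ coupling: the solver calls an analysis hook on its
    # in-memory field each timestep instead of writing every step to disk.
    # The stand-in "solver" (a smoothing update) and the statistics computed
    # are illustrative assumptions.
    import numpy as np

    def analyze(step, field):
        """In-situ analysis hook: shares the solver's memory, does no file I/O."""
        print(f"step {step:3d}  mean={field.mean():.4f}  max={field.max():.4f}")

    def run_simulation(n_steps, analysis_hook, analysis_stride=10):
        field = np.random.rand(256, 256)
        for step in range(n_steps):
            # stand-in solver update: average of the four neighbors
            field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                            + np.roll(field, 1, 1) + np.roll(field, -1, 1))
            if step % analysis_stride == 0:
                analysis_hook(step, field)    # analysis runs inside the solver loop

    run_simulation(50, analyze)
    ```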

  6. Should Euthanasia Be Considered Iatrogenic?

    Science.gov (United States)

    Barone, Silvana; Unguru, Yoram

    2017-08-01

    As more countries adopt laws and regulations concerning euthanasia, pediatric euthanasia has become an important topic of discussion. Conceptions of what constitutes harm to patients are fluid and highly dependent on a myriad of factors including, but not limited to, health care ethics, family values, and cultural context. Euthanasia could be viewed as iatrogenic insofar as it results in an outcome (death) that some might consider inherently negative. However, this perspective fails to acknowledge that death, the outcome of euthanasia, is not an inadvertent or preventable complication but rather the goal of the medical intervention. Conversely, the refusal to engage in the practice of euthanasia might be conceived as iatrogenic insofar as it might inadvertently prolong patient suffering. This article will explore cultural and social factors informing families', health care professionals', and society's views on pediatric euthanasia in selected countries. © 2017 American Medical Association. All Rights Reserved.

  7. Considering Student Coaching

    Science.gov (United States)

    Keen, James P.

    2014-01-01

    What does student coaching involve and what considerations make sense in deciding to engage an outside contractor to provide personal coaching? The author explores coaching in light of his own professional experience and uses this reflection as a platform from which to consider the pros and cons of student coaching when deciding whether to choose…

  8. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
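
    The backfill capability described above amounts to querying the batch system for idle nodes and the time window before they are needed, then shaping the job request to fit. The sketch below captures that logic with a stub query function; the numbers and field names are made up for illustration, and a real system would parse the scheduler's backfill report instead.

    ```python
    # Toy version of the backfill logic described above: ask the batch system
    # how many nodes are idle and for how long, then shape the next job to fit.
    # query_backfill() is a stub returning made-up numbers; a real implementation
    # would parse the scheduler's backfill report instead.
    def query_backfill():
        """Stub: pretend the scheduler reports idle nodes and their time window."""
        return {"free_nodes": 312, "window_minutes": 95}

    def shape_job(free_nodes, window_minutes, min_nodes=15, safety_margin=0.9):
        """Choose a node count and walltime that fit inside the backfill window."""
        if free_nodes < min_nodes or window_minutes < 10:
            return None    # window too small to be worth a submission
        return {"nodes": free_nodes,
                "walltime_minutes": int(window_minutes * safety_margin)}

    slot = query_backfill()
    job = shape_job(**slot)
    print("submit:", job)
    ```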

  9. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs
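
    A minimal example of the direct time integration named above is the one-dimensional Yee (FDTD) scheme, which leapfrogs electric and magnetic field updates on a staggered space grid. The sketch below uses normalized units and a soft Gaussian source; it is pedagogical, not a production solver.

    ```python
    # One-dimensional FDTD (Yee) update for Maxwell's equations in vacuum:
    # E and H live on staggered grids and are leapfrogged in time. Normalized
    # units and a soft Gaussian source; a pedagogical sketch, not a production
    # solver.
    import numpy as np

    nz, n_steps = 400, 600
    Ex = np.zeros(nz)        # electric field at integer grid points
    Hy = np.zeros(nz - 1)    # magnetic field at half-integer points
    courant = 0.5            # c * dt / dz, below the stability limit of 1

    for t in range(n_steps):
        Hy += courant * (Ex[1:] - Ex[:-1])              # update H from curl E
        Ex[1:-1] += courant * (Hy[1:] - Hy[:-1])        # update E from curl H
        Ex[nz // 4] += np.exp(-((t - 60) / 20.0) ** 2)  # soft Gaussian source

    print("field energy proxy:", float((Ex**2).sum() + (Hy**2).sum()))
    ```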

  10. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  11. Diclofenac Topical (osteoarthritis pain)

    Science.gov (United States)

    ... gel (Voltaren) is used to relieve pain from osteoarthritis (arthritis caused by a breakdown of the lining ... Diclofenac topical liquid (Pennsaid) is used to relieve osteoarthritis pain in the knees. Diclofenac is in a ...

  12. Diclofenac Topical (actinic keratosis)

    Science.gov (United States)

    ... topical gel (Solaraze) is used to treat actinic keratosis (flat, scaly growths on the skin caused by ... The way diclofenac gel works to treat actinic keratosis is not known.Diclofenac is also available as ...

  13. Topics in Nuclear Astrophysics

    International Nuclear Information System (INIS)

    Chung, K.C.

    1982-01-01

    Some topics in nuclear astrophysics are discussed, e.g.: highly evolved stellar cores, stellar evolution (through the temperature analysis of stellar surfaces), nucleosynthesis, and finally the solar neutrino problem.

  14. Correlated Topic Vector for Scene Classification.

    Science.gov (United States)

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally exploits the correlations among topics, which are seldom considered in conventional feature encodings, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to topics are further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.

  15. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail, and the performance of the VCR on a three-parameter model problem is illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32 and, as a consequence, the algorithm is implicitly vectorizable.
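
    For reference, point cyclic reduction, the algorithm the VCR adapts, eliminates the odd-indexed unknowns of a tridiagonal system level by level, so each level is one long vectorizable sweep. Below is a compact NumPy sketch for a system of n = 2^k - 1 unknowns; it illustrates the point algorithm only, not the banded or VPS 32-specific variants.

    ```python
    # Point cyclic reduction for a tridiagonal system Ax = d with n = 2**k - 1
    # unknowns; a, b, c hold the sub-, main, and super-diagonals. Each level
    # updates many equations at once, which is what makes the scheme attractive
    # on vector pipelines. Illustrative sketch only (no pivoting, no banded or
    # block generalization).
    import numpy as np

    def cyclic_reduction(a, b, c, d):
        a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
        n = len(b)
        s = 1
        while 2 * s < n + 1:                        # forward elimination levels
            idx = np.arange(2 * s - 1, n, 2 * s)    # equations reduced this level
            alpha = a[idx] / b[idx - s]
            beta = c[idx] / b[idx + s]
            b[idx] -= alpha * c[idx - s] + beta * a[idx + s]
            d[idx] -= alpha * d[idx - s] + beta * d[idx + s]
            a[idx] = -alpha * a[idx - s]            # new coupling to idx - 2s
            c[idx] = -beta * c[idx + s]             # new coupling to idx + 2s
            s *= 2
        x = np.zeros(n)
        x[s - 1] = d[s - 1] / b[s - 1]              # single remaining equation
        while s > 1:                                # back substitution levels
            s //= 2
            idx = np.arange(s - 1, n, 2 * s)
            left = np.where(idx - s >= 0, x[np.maximum(idx - s, 0)], 0.0)
            right = np.where(idx + s < n, x[np.minimum(idx + s, n - 1)], 0.0)
            x[idx] = (d[idx] - a[idx] * left - c[idx] * right) / b[idx]
        return x

    n = 15                                          # 2**4 - 1 unknowns
    a = np.ones(n); b = np.full(n, 4.0); c = np.ones(n)
    a[0] = c[-1] = 0.0
    d = np.arange(1.0, n + 1)
    x = cyclic_reduction(a, b, c, d)
    A = np.diag(b) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    print("max residual:", float(np.max(np.abs(A @ x - d))))
    ```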

  16. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
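
    The Gaussian-like solution mentioned above has the familiar plume form: concentration falls off as Gaussians in the crosswind and vertical directions, with an image term for ground reflection. The sketch below evaluates such a plume; the dispersion-coefficient fits are crude placeholder assumptions, not the MRBT parameterization.

    ```python
    # Gaussian-plume style estimate in the spirit of the analytical model
    # described above: Gaussian falloff in the crosswind (y) and vertical (z)
    # directions, with an image source for ground reflection. The sigma fits
    # are crude power-law placeholders, not the MRBT parameterization.
    import numpy as np

    def concentration(Q, u, x, y, z, H):
        """Steady Gaussian plume with ground reflection (SI units)."""
        sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # assumed dispersion fit
        sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # image source term
        return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # ground-level centerline concentration 2 km downwind of a 10 m release
    print(concentration(Q=1.0, u=3.0, x=2000.0, y=0.0, z=0.0, H=10.0))
    ```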

  17. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool, as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  18. Topical Drugs for Pain Relief

    Directory of Open Access Journals (Sweden)

    Anjali Srinivasan

    2015-03-01

    Topical therapy helps patients with oral and perioral pain problems such as ulcers, burning mouth syndrome, temporomandibular disorders, neuromas, neuropathies and neuralgias. Topical drugs used in the field of dentistry include topical anaesthetics, topical analgesics, topical antibiotics and topical corticosteroids. They provide symptomatic or curative effects. Topical drugs are easy to apply, avoid hepatic first-pass metabolism, and are more site-specific, but they can only be used for medications that require low plasma concentrations to achieve a therapeutic effect.

  19. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which federates Polish academic supercomputer centers. Selected experimental results achieved through the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept for estimating source model parameters from measured sound levels, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting predicted hearing threshold shifts caused by exposure to excessive noise can raise public awareness of noise threats.

  20. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  1. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  2. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  3. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations powerful computers are obligatory, effectively splitting the thesis in two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.
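
    The core idea of Feynman diagram sampling, random-walking through the space of expansion terms with Metropolis acceptance on their weights, can be shown on a toy series. In the sketch below the "diagram space" is just the order n of the expansion of exp(x), whose sampled order distribution is Poisson(x), so the mean order should converge to x. This is purely pedagogical and far simpler than the QCD application in the thesis.

    ```python
    # Toy diagrammatic sampling: random-walk over the order n of the series
    # exp(x) = sum_n x**n / n!, accepting moves with Metropolis probability on
    # the term weights. The sampled orders follow Poisson(x), so the mean order
    # should converge to x. Purely pedagogical, far simpler than the QCD case.
    import math
    import random

    def mean_sampled_order(x, n_steps=200_000, seed=5):
        random.seed(seed)
        weight = lambda k: x**k / math.factorial(k)   # positive term weights
        n, total = 0, 0
        for _ in range(n_steps):
            proposal = n + random.choice((-1, 1))     # symmetric local move
            if proposal >= 0 and random.random() < min(1.0, weight(proposal) / weight(n)):
                n = proposal
            total += n
        return total / n_steps

    print("estimated mean order (should approach x = 2.5):", mean_sampled_order(2.5))
    ```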

  4. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  5. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
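
    As a down-to-earth illustration of topology-aware mapping, the sketch below greedily places the most communication-heavy ranks first, choosing for each rank the free node slot that minimizes traffic-weighted hop distance to already-placed peers. This is a toy heuristic for exposition, not the spectral bisection or neighbor-join-tree methods evaluated in the paper.

    ```python
    # Toy greedy task mapping: place the heaviest-communicating ranks first,
    # and give each rank the free slot minimizing traffic-weighted hop distance
    # to its already-placed peers. A heuristic for exposition, not the spectral
    # bisection or neighbor-join-tree methods evaluated in the paper.
    import numpy as np

    def greedy_map(comm, dist):
        """comm[i, j]: traffic between ranks; dist[a, b]: hops between node slots."""
        n = comm.shape[0]
        order = np.argsort(-comm.sum(axis=1))    # heaviest communicators first
        placement, free = {}, set(range(n))
        for rank in order:
            best_slot, best_cost = None, np.inf
            for slot in free:
                cost = sum(comm[rank, r] * dist[slot, s] for r, s in placement.items())
                if cost < best_cost:
                    best_slot, best_cost = slot, cost
            placement[rank] = best_slot
            free.remove(best_slot)
        return placement

    rng = np.random.default_rng(0)
    comm = rng.integers(0, 10, (8, 8))
    comm = (comm + comm.T) * (1 - np.eye(8, dtype=int))   # symmetric, zero diagonal
    hops = np.abs(np.arange(8)[:, None] - np.arange(8))   # slots on a ring
    dist = np.minimum(hops, 8 - hops)
    print(greedy_map(comm, dist))    # rank -> slot
    ```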

  6. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanisms for preventing catastrophic market action are “circuit breakers.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
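
    For intuition, VPIN buckets the trade stream into equal-volume bins, classifies each bin's volume as buyer- or seller-initiated, and averages the absolute order imbalance over a rolling window. The sketch below implements that flow with a naive tick rule for classification; the actual estimator uses bulk volume classification, so treat this as a conceptual toy.

    ```python
    # Toy VPIN-style estimator: split the trade stream into equal-volume
    # buckets, classify volume as buyer- or seller-initiated with a naive tick
    # rule, and average the absolute order imbalance per bucket. The study's
    # estimator uses bulk volume classification; this is the concept only.
    import numpy as np

    def vpin(prices, volumes, bucket_volume, window=50):
        buy = sell = filled = 0.0
        imbalances, last_price = [], prices[0]
        for p, v in zip(prices[1:], volumes[1:]):
            side_is_buy = p >= last_price      # naive tick rule
            last_price = p
            while v > 0:
                take = min(v, bucket_volume - filled)
                if side_is_buy:
                    buy += take
                else:
                    sell += take
                filled += take
                v -= take
                if filled >= bucket_volume:    # bucket complete
                    imbalances.append(abs(buy - sell) / bucket_volume)
                    buy = sell = filled = 0.0
        recent = imbalances[-window:]
        return float(np.mean(recent)) if recent else float("nan")

    rng = np.random.default_rng(1)
    prices = 100 + np.cumsum(rng.normal(0, 0.05, 10_000))
    volumes = rng.integers(1, 500, 10_000)
    print("VPIN:", vpin(prices, volumes, bucket_volume=5_000))
    ```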

  7. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  8. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputer in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  9. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, for large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they have scaling limitations and are mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
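
    At the heart of MSTC is the standard k-means iteration: assign each observation to its nearest centroid, then recompute centroids. The NumPy sketch below shows that serial kernel; in the hybrid implementation described above, the distance computation is what gets distributed across MPI ranks and offloaded to GPUs. Data shapes and k are illustrative.

    ```python
    # Serial kernel of the k-means iteration underlying MSTC: assign each
    # observation to its nearest centroid, then recompute centroids. In the
    # hybrid implementation the distance computation is distributed over MPI
    # ranks and offloaded to GPUs; shapes and k here are illustrative.
    import numpy as np

    def kmeans_cluster(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # squared distance of every observation to every center: shape (n, k)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            for j in range(k):                 # centroid update
                members = X[labels == j]
                if len(members):
                    centers[j] = members.mean(axis=0)
        return labels, centers

    X = np.random.default_rng(2).normal(size=(3000, 10))   # n observations, 10 variables
    labels, centers = kmeans_cluster(X, k=8)
    print("cluster sizes:", np.bincount(labels, minlength=8))
    ```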

  10. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  11. Topics in Bethe Ansatz

    Science.gov (United States)

    Wang, Chunguang

    Integrable quantum spin chains have close connections to integrable quantum field theories, modern condensed matter physics, and string and Yang-Mills theories. Bethe ansatz is one of the most important approaches for solving quantum integrable spin chains. At the heart of the algebraic structure of integrable quantum spin chains are the quantum Yang-Baxter equation and the boundary Yang-Baxter equation. This thesis focuses on four topics in Bethe ansatz. The Bethe equations for the isotropic periodic spin-1/2 Heisenberg chain with N sites have solutions containing ±i/2 that are singular: both the corresponding energy and the algebraic Bethe ansatz vector are divergent. Such solutions must be carefully regularized. We consider a regularization involving a parameter that can be determined using a generalization of the Bethe equations. These generalized Bethe equations provide a practical way of determining which singular solutions correspond to eigenvectors of the model. The Bethe equations for the periodic XXX and XXZ spin chains admit singular solutions, for which the corresponding eigenvalues and eigenvectors are ill-defined. We use a twist regularization to derive conditions for such singular solutions to be physical, in which case they correspond to genuine eigenvalues and eigenvectors of the Hamiltonian. We analyze the ground state of the open spin-1/2 isotropic quantum spin chain with a non-diagonal boundary term using a recently proposed Bethe ansatz solution. As the coefficient of the non-diagonal boundary term tends to zero, the Bethe roots split evenly into two sets: those that remain finite, and those that become infinite. We argue that the former satisfy conventional Bethe equations, while the latter satisfy a generalization of the Richardson-Gaudin equations. We derive an expression for the leading correction to the boundary energy in terms of the boundary parameters. We argue that the Hamiltonians for A^(2)_2n open quantum spin chains

  12. Discriminative Relational Topic Models.

    Science.gov (United States)

    Chen, Ning; Zhu, Jun; Xia, Fei; Zhang, Bo

    2015-05-01

    Relational topic models (RTMs) provide a probabilistic generative process to describe both the link structure and document contents for document networks, and they have shown promise on predicting network structures and discovering latent topic representations. However, existing RTMs have limitations in both the restricted model expressiveness and incapability of dealing with imbalanced network data. To expand the scope and improve the inference accuracy of RTMs, this paper presents three extensions: 1) unlike the common link likelihood with a diagonal weight matrix that allows the-same-topic interactions only, we generalize it to use a full weight matrix that captures all pairwise topic interactions and is applicable to asymmetric networks; 2) instead of doing standard Bayesian inference, we perform regularized Bayesian inference (RegBayes) with a regularization parameter to deal with the imbalanced link structure issue in real networks and improve the discriminative ability of learned latent representations; and 3) instead of doing variational approximation with strict mean-field assumptions, we present collapsed Gibbs sampling algorithms for the generalized relational topic models by exploring data augmentation without making restricting assumptions. Under the generic RegBayes framework, we carefully investigate two popular discriminative loss functions, namely, the logistic log-loss and the max-margin hinge loss. Experimental results on several real network datasets demonstrate the significance of these extensions on improving prediction performance.

  13. Telecommuting. Factors to consider.

    Science.gov (United States)

    D'Arruda, K A

    2001-10-01

    1. Telecommuting is a work arrangement in which employees work part time or full time from their homes or smaller telework centers. They communicate with employers via computer. 2. Telecommuting can raise legal issues for companies. Can telecommuting be considered a reasonable accommodation under the Americans With Disabilities Act? When at home, is a worker injured within the course and scope of their employment for purposes of workers' compensation? 3. Occupational and environmental health nurses may need to alter existing programs to meet the distinct needs of telecommuters. Often, there are ergonomic issues and home office safety issues which are not of concern to other employees. Additionally, occupational and environmental health nurses may have to offer programs in new formats (e.g., Internet or Intranet programs) to effectively communicate with teleworkers.

  14. Topical botulinum toxin.

    Science.gov (United States)

    Collins, Ashley; Nasir, Adnan

    2010-03-01

    Nanotechnology is a rapidly growing discipline that capitalizes on the unique properties of matter engineered on the nanoscale. Vehicles incorporating nanotechnology have led to great strides in drug delivery, allowing for increased active ingredient stability, bioavailability, and site-specific targeting. Botulinum toxin has historically been used for the correction of neurological and neuromuscular disorders, such as torticollis, blepharospasm, and strabismus. Recent dermatological indications have been for the management of axillary hyperhydrosis and facial rhytides. Traditional methods of botulinum toxin delivery have been needle-based. These have been associated with increased pain and cost. Newer methods of botulinum toxin formulation have yielded topical preparations that are bioactive in small pilot clinical studies. While there are some risks associated with topical delivery, the refinement and standardization of delivery systems and techniques for the topical administration of botulinum toxin using nanotechnology is anticipated in the near future.

  15. Now consider diffusion

    International Nuclear Information System (INIS)

    Dungey, J.W.

    1984-01-01

    The author wants to talk about future work, but first he will reply to Stan Cowley's comment on his naivety in believing the whole story to 99% confidence in '65, when he knew about Fairfield's results. Does it matter whether you make the right judgment about theories? Yes, it does, particularly for experimentalists perhaps, but also for theorists. The work you do later depends on the judgment you've made on previous work. People have wasted a lot of time developing on insecure or even wrong foundations. Now for future work. One mild surprise the author has had is that he hasn't heard more about diffusion, in two contexts. Gordon Rostoker is yet to come and he may talk about particles getting into the magnetosphere by diffusion. Lots of noise is observed and so diffusion must happen. If time had not been short, the author was planning to discuss in a handwaving way what sort of diffusion mechanisms one might consider. The other aspect of diffusion he was going to talk about is at the other end of things and is velocity diffusion, which is involved in anomalous resistivity.

  16. Health Topic XML File Description

    Science.gov (United States)

    Health Topic XML File Description: MedlinePlus (https://medlineplus.gov/xmldescription.html). Describes the information categories assigned to MedlinePlus health topics and gives an example of a full health topic record.

  17. Topical Research: Africa.

    Science.gov (United States)

    Lynn, Karen

    This lesson plan can be used in social studies, language arts, or library research. The instructional objective is for students to select a topic of study relating to Africa, write a thesis statement, collect information from media sources, and develop a conclusion. The teacher may assign the lesson for written or oral evaluation. The teacher…

  18. Topics in quantum theory

    International Nuclear Information System (INIS)

    Yuille, A.L.

    1980-11-01

    Topics in the Yang-Mills theories of strong interactions and the quantum theories of gravity are examined using the path integral approach, including: Yang-Mills instantons in curved spacetimes, Israel-Wilson metrics, Kaehler spacetimes, and instantons and anti-instantons. (U.K.)

  19. Salicylic Acid Topical

    Science.gov (United States)

    ... the package label for more information.Apply a small amount of the salicylic acid product to one or two small areas you want to treat for 3 days ... know that children and teenagers who have chicken pox or the flu should not use topical salicylic ...

  20. Characters and Topical Diversity

    DEFF Research Database (Denmark)

    Eriksson, Rune

    2014-01-01

    The purpose of this article is to contribute to our understanding of the difference between the bestseller and the non-bestseller in nonfiction. It is noticed that many bestsellers in nonfiction belong to the sub-genre of creative nonfiction, but also that the topics in this kind of literature i...

  1. Selected topics in magnetism

    CERN Document Server

    Gupta, L C

    1993-01-01

    Part of the "Frontiers in Solid State Sciences" series, this volume presents essays on such topics as spin fluctuations in Heisenberg magnets, quenching of spin fluctuations by high magnetic fields, and the Kondo effect and heavy fermions in rare earths, amongst others.

  2. Nuclear safety - Topical issues

    International Nuclear Information System (INIS)

    1995-01-01

    The following topical issues related to nuclear safety are discussed: steam generators; maintenance strategies; control rod drive nozzle cracks; core shrouds cracks; sump strainer blockage; fire protection; computer software important for safety; safety during shutdown; operational safety experience; external hazards and other site related issues. 5 figs, 5 tabs

  3. Topical immunomodulators in dermatology

    Directory of Open Access Journals (Sweden)

    Khandpur Sujay

    2004-04-01

    Full Text Available Topical immunomodulators are agents that regulate the local immune response of the skin. They are now emerging as the therapy of choice for several immune-mediated dermatoses such as atopic dermatitis, contact allergic dermatitis, alopecia areata, psoriasis, vitiligo, connective tissue disorders such as morphea and lupus erythematosus, disorders of keratinization and several benign and malignant skin tumours, because of their comparable efficacy, ease of application and greater safety than their systemic counterparts. They can be used on a domiciliary basis for longer periods without aggressive monitoring. In this article, we have discussed the mechanism of action, common indications and side-effects of the commonly used topical immunomodulators, excluding topical steroids. Moreover, newer agents, which are still in the experimental stages, have also been described. A MEDLINE search was undertaken using the key words "topical immunomodulators, dermatology" and related articles were also searched. In addition, a manual search for many Indian articles, which are not indexed, was also carried out. Wherever possible, the full article was reviewed. If the full article could not be traced, the abstract was used.

  4. Differential Topic Models.

    Science.gov (United States)

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.

  5. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
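
    A minimal sketch of the two-stage pipeline the abstract describes: a one-level 2-D Haar subband decomposition followed by vector quantization of the coefficients. The test field, vector dimension, and codebook size are invented for illustration; the paper's application-specific quantizers and rate-distortion optimization are not reproduced here.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def haar2(x):
    """One-level 2-D Haar transform: average (a), horizontal (h),
    vertical (v) and diagonal (d) subbands."""
    a = (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[1::2, 0::2] + x[0::2, 1::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[1::2, 0::2] - x[0::2, 1::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[1::2, 0::2] - x[0::2, 1::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def vq_encode(band, dim=4, k=32):
    """Group subband coefficients into length-`dim` vectors and train a
    k-entry codebook (a stand-in for the paper's optimized quantizers)."""
    vecs = band.reshape(-1, dim)
    codebook, idx = kmeans2(vecs, k, minit="++", seed=0)
    return codebook, idx

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth toy "model output"
for name, band in zip("ahvd", haar2(field)):
    cb, idx = vq_encode(band)
    bits = idx.size * np.log2(cb.shape[0]) / field.size
    print(f"subband {name}: {bits:.2f} index bits per field sample")
```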

  6. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  7. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel-processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985, a 64-node, 1 GF machine completed in August 1987, and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N_1 x N_2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  8. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at the research center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As proof tests, the special tools have been validated on specific tasks by comparing experimental and numerical results, which are in reasonable agreement. (author)

  9. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  10. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with Navier-Stokes computations widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations, CFD, aeroelastic, and controls coupling for flutter suppression and active control, and the development of a computational electromagnetics technology based on CFD methods. Attention is given to computational challenges standing in the way of establishing a computational environment encompassing many technologies. 40 refs

  11. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak performance. For sparse matrices, storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is therefore preferable to a sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...
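
    The trade-off described above can be seen in miniature with SciPy: a dense BLAS/LAPACK solve versus an iterative sparse solver on the same least squares problem. The matrix size and density below are illustrative only; the crossover point depends entirely on the machine's dense-kernel speed.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
m, n = 2000, 500
A_sp = sparse.random(m, n, density=0.01, random_state=1, format="csr")
b = rng.standard_normal(m)

# Sparse route: LSQR touches only the stored nonzeros (indirect addressing).
x_sp = lsqr(A_sp, b, atol=1e-10, btol=1e-10, iter_lim=5000)[0]

# Dense route: more arithmetic and storage, but contiguous BLAS-3 kernels.
x_dn, *_ = np.linalg.lstsq(A_sp.toarray(), b, rcond=None)

# The two solutions should agree up to the solvers' tolerances.
print(np.linalg.norm(x_sp - x_dn))
```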

  12. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, to make FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan, and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
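
    A hedged sketch of the ingredient the paper optimizes, an MPI virtual topology, using mpi4py's Cartesian communicator. The process grid, field size, and single halo exchange are placeholders, not the paper's communication model or optimal topology rules.

```python
# Run with e.g.: mpiexec -n 8 python fdtd_topology.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
# Factor the job into a 2-D process grid, then build a Cartesian virtual
# topology; reorder=True lets the MPI library map neighboring subdomains
# onto physically close nodes.
dims = MPI.Compute_dims(comm.Get_size(), [0, 0])
cart = comm.Create_cart(dims, periods=[False, False], reorder=True)

left, right = cart.Shift(direction=0, disp=1)   # ranks of x-neighbors
down, up = cart.Shift(direction=1, disp=1)      # ranks of y-neighbors

ez = np.zeros((66, 66))   # local subdomain plus one-cell ghost layers

# One halo exchange of a boundary row per time step (PROC_NULL at the
# physical boundary makes this a no-op there).
cart.Sendrecv(sendbuf=ez[1, :].copy(), dest=left,
              recvbuf=ez[-1, :], source=right)
```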

  13. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  14. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  15. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnection is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. It is needless to say that highly tuned software towards new architectures like many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver towards the next-generation supercomputer system, the so-called 'K computer' system. We have developed two versions, the standard version (eigen_s) and the enhanced performance version (eigen_sx), which were developed on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS with a two-hundred-thousand-dimensional matrix. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY-representation. Even though this version is still at a test stage, it shows 4.7 TFLOPS with a matrix of the same dimension, compared with eigen_s. (author)
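
    The first stage eigen_s employs, Householder tridiagonalization, can be shown compactly in NumPy. This is an unblocked textbook version for illustration only; the solver described above uses blocking and a two-dimensional data distribution that this sketch omits.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form with Householder
    reflectors; eigenvalues are preserved by the orthogonal similarity."""
    A = A.copy()
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k+1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue
        v /= nv
        H = np.eye(n - k - 1) - 2.0 * np.outer(v, v)
        A[k+1:, k:] = H @ A[k+1:, k:]   # zero column k below the subdiagonal
        A[k:, k+1:] = A[k:, k+1:] @ H   # keep the matrix symmetric
    return A

rng = np.random.default_rng(0)
S = rng.standard_normal((8, 8)); S = S + S.T
T = tridiagonalize(S)
# A divide-and-conquer eigensolver (the DC stage) would now act on T.
assert np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(S))
```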

  16. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  17. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
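
    A minimal serial sketch of an iterative finite-difference scheme with damped residual updates of the kind mentioned above, applied to a toy Poisson problem on a regular Cartesian grid. The grid size, damping factor, and pseudo-time step are illustrative guesses, not the solvers' actual parameters.

```python
import numpy as np

# Solve -∇²u = f by pseudo-transient iteration: march the damped
# residual in pseudo-time until it drops below tolerance.
n = 128
dx = 1.0 / (n - 1)
f = np.ones((n, n))
u = np.zeros((n, n))
dtau = 0.25 * dx**2        # pseudo-time step (Jacobi stability limit)
damp = 0.9                 # residual damping, acting as a cheap preconditioner
dudt = np.zeros((n - 2, n - 2))

for it in range(50_000):
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    r = f[1:-1, 1:-1] + lap            # residual of -∇²u = f
    dudt = damp * dudt + r             # damped (second-order) update
    u[1:-1, 1:-1] += dtau * dudt
    if np.max(np.abs(r)) < 1e-6:
        break
print(it, np.max(np.abs(r)))
```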

  18. Topics in industrial mathematics

    International Nuclear Information System (INIS)

    Vatsya, S.R.

    1992-01-01

    Mathematical methods are widely used to solve practical problems arising in modern industry. This article outlines some of the topics relevant to AECL programmes. This covers the applications of transmission and neutron transport tomography to determine density distributions in rocks and two phase flow situations. Another example covered is the use of variational methods to solve the problems of aerosol migration and control theory. (author). 7 refs

  19. Relativity theory - topical

    International Nuclear Information System (INIS)

    Schmutzer, E.

    1979-01-01

    Issued on the occasion of Albert Einstein's 100th birthday, the book deals topically with the special and general theories of relativity. The latest experiments to confirm the relativity theory are described and the historical development of the theory is presented in detail. Emphasis is given to the disclosure of deep insights into the nature of matter. Of interest to experts in the physical and natural sciences and to mathematicians

  20. Topic Visualization and Survival Analysis

    OpenAIRE

    Wang, Ping Jr

    2017-01-01

    Latent semantic structure in a text collection is called a topic. In this thesis, we aim to visualize topics in the scientific literature and detect active or inactive research areas based on their lifetime. Topics were extracted from over 1 million abstracts from the arXiv.org database using Latent Dirichlet Allocation (LDA). Hellinger distance measures similarity between two topics. Topics are determined to be relevant if their pairwise distances are smaller than the threshold of Hellinger ...
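
    The Hellinger distance used for topic similarity is straightforward to compute; a small NumPy sketch with invented topic-word distributions (the thesis's LDA topics over the arXiv vocabulary are of course far larger):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between discrete distributions:
    H(p, q) = sqrt(0.5 * sum((sqrt(p) - sqrt(q))**2)), in [0, 1]."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Toy topic-word distributions over a 5-word vocabulary.
topic_a = np.array([0.50, 0.20, 0.10, 0.10, 0.10])
topic_b = np.array([0.40, 0.30, 0.10, 0.10, 0.10])
topic_c = np.array([0.05, 0.05, 0.10, 0.40, 0.40])

print(hellinger(topic_a, topic_b))   # small: related topics
print(hellinger(topic_a, topic_c))   # large: distinct topics
# Pairs whose distance falls below a chosen threshold would be linked
# when tracking the lifetime of a research area.
```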

  1. Topics in field theory

    CERN Document Server

    Karpilovsky, G

    1989-01-01

    This monograph gives a systematic account of certain important topics pertaining to field theory, including the central ideas, basic results and fundamental methods.Avoiding excessive technical detail, the book is intended for the student who has completed the equivalent of a standard first-year graduate algebra course. Thus it is assumed that the reader is familiar with basic ring-theoretic and group-theoretic concepts. A chapter on algebraic preliminaries is included, as well as a fairly large bibliography of works which are either directly relevant to the text or offer supplementary material of interest.

  2. Topics in CP violation

    International Nuclear Information System (INIS)

    Quinn, H.R.

    1993-02-01

    Given the varied backgrounds of the members of this audience this talk will be a grab bag of topics related to the general theme of CP Violation. I do not have time to dwell in detail on any of them. First, for the astronomers and astrophysicists among you, I want to begin by reviewing the experimental status of evidence for CP violation in particle processes. There is only one system where this has been observed, and that is in the decays of neutral K mesons

  3. Topics in CP violation

    Science.gov (United States)

    Quinn, H. R.

    1993-02-01

    Given the varied backgrounds of the members of this audience this talk will be a grab bag of topics related to the general theme of CP Violation. I do not have time to dwell in detail on any of them. First, for the astronomers and astrophysicists among you, I want to begin by reviewing the experimental status of evidence for CP violation in particle processes. There is only one system where this has been observed, and that is in the decays of neutral K mesons.

  4. Topics in Operator Theory

    CERN Document Server

    Ball, Joseph A; Helton, JWilliam; Rodman, Leiba; Spitkovsky, Iiya

    2010-01-01

    This is the first volume of a collection of original and review articles on recent advances and new directions in a multifaceted and interconnected area of mathematics and its applications. It encompasses many topics in theoretical developments in operator theory and its diverse applications in applied mathematics, physics, engineering, and other disciplines. The purpose is to bring in one volume many important original results of cutting edge research as well as authoritative review of recent achievements, challenges, and future directions in the area of operator theory and its applications.

  5. Topics on continua

    CERN Document Server

    Macias, Sergio

    2005-01-01

    Specialized as it might be, continuum theory is one of the most intriguing areas in mathematics. However, despite being popular journal fare, few books have thoroughly explored this interesting aspect of topology. In Topics on Continua, Sergio Macías, one of the field's leading scholars, presents four of his favorite continuum topics: inverse limits, Jones's set function T, homogeneous continua, and n-fold hyperspaces, and in doing so, presents the most complete set of theorems and proofs ever contained in a single topology volume. Many of the results presented have previously appeared only in research papers, and some appear here for the first time. After building the requisite background and exploring the inverse limits of continua, the discussions focus on Professor Jones's set function T and continua for which T is continuous. An introduction to topological groups and group actions leads to a proof of Effros's Theorem, followed by a presentation of two decomposition theorems. The author then offers an...

  6. Meatotomy using topical anesthesia: A painless option

    Directory of Open Access Journals (Sweden)

    Vinod Priyadarshi

    2015-01-01

    Conclusion: Use of topical anesthesia in the form of Prilox (EMLA) cream for meatotomy is a safe and effective method that avoids painful injections and the anxiety related to them, and should be considered in most such patients as an alternative to conventional penile blocks or general anesthesia.

  7. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
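
    The core operation, cross-correlation of noise records from a station pair, can be sketched with an FFT. Synthetic records and a fabricated 25-sample delay stand in for real seismograms; the pre-processing and sensitivity kernels described above are omitted.

```python
import numpy as np

def noise_correlation(u, v):
    """Frequency-domain cross-correlation of two equal-length records,
    zero-padded to avoid wrap-around; returns lags -(n-1)..(n-1)."""
    n = len(u)
    U = np.fft.rfft(u, 2 * n)
    V = np.fft.rfft(v, 2 * n)
    c = np.fft.irfft(U * np.conj(V))
    return np.concatenate((c[-(n - 1):], c[:n]))

rng = np.random.default_rng(0)
n = 4096
src = rng.standard_normal(n)          # common noise wavefield
u = src                               # station A
v = np.roll(src, 25)                  # station B records it 25 samples later
c = noise_correlation(u, v)
lags = np.arange(-(n - 1), n)
print("recovered lag:", lags[np.argmax(c)])   # -25: B lags A by 25 samples
```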

  8. Changing the Topic. Topic Position in Ancient Greek Word Order

    NARCIS (Netherlands)

    Allan, R.J.

    2014-01-01

    In Ancient Greek, topics can be expressed as intra-clausal constituents, but they can also precede or follow the main clause as extra-clausal constituents. Together, these various topic expressions constitute a coherent system of complementary pragmatic functions. For a comprehensive account of topic

  9. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations: 1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids, in addition to 134 billion-atom non-reactive space-time multiresolution MD, with parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  10. Topics in atomic physics

    CERN Document Server

    Burkhardt, Charles E

    2006-01-01

    The study of atomic physics propelled us into the quantum age in the early twentieth century and carried us into the twenty-first century with a wealth of new and, in some cases, unexplained phenomena. Topics in Atomic Physics provides a foundation for students to begin research in modern atomic physics. It can also serve as a reference because it contains material that is not easily located in other sources. A distinguishing feature is the thorough exposition of the quantum mechanical hydrogen atom using both the traditional formulation and an alternative treatment not usually found in textbooks. The alternative treatment exploits the preeminent nature of the pure Coulomb potential and places the Lenz vector operator on an equal footing with other operators corresponding to classically conserved quantities. A number of difficult to find proofs and derivations are included as is development of operator formalism that permits facile solution of the Stark effect in hydrogen. Discussion of the classical hydrogen...

  11. Topics in mathematical biology

    CERN Document Server

    Hadeler, Karl Peter

    2017-01-01

    This book analyzes the impact of quiescent phases on biological models. Quiescence arises, for example, when moving individuals stop moving, hunting predators take a rest, infected individuals are isolated, or cells enter the quiescent compartment of the cell cycle. In the first chapter of Topics in Mathematical Biology general principles about coupled and quiescent systems are derived, including results on shrinking periodic orbits and stabilization of oscillations via quiescence. In subsequent chapters classical biological models are presented in detail and challenged by the introduction of quiescence. These models include delay equations, demographic models, age structured models, Lotka-Volterra systems, replicator systems, genetic models, game theory, Nash equilibria, evolutionary stable strategies, ecological models, epidemiological models, random walks and reaction-diffusion models. In each case we find new and interesting results such as stability of fixed points and/or periodic orbits, excitability...

  12. Topical Acne Treatments and Pregnancy

    Science.gov (United States)

    Topical Acne Treatments In every pregnancy, a woman starts out with a 3-5% chance of having a baby ... This sheet talks about whether exposure to topical acne treatments may increase the risk for birth defects ...

  13. Symbiosis: Rich, Exciting, Neglected Topic

    Science.gov (United States)

    Rowland, Jane Thomas

    1974-01-01

    Argues that the topic of symbiosis has been greatly neglected and underemphasized in general-biology textbooks. Discusses many types and examples of symbiosis, and provides an extensive bibliography of the literature related to this topic. (JR)

  14. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  15. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  16. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and a 30-year-long dream for lattice QCD
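
    The abstract stresses that the CG inverter contains two global sums; a plain serial conjugate gradient makes those reductions visible. A generic symmetric positive-definite matrix stands in for the lattice Dirac normal operator here.

```python
import numpy as np

def cg(apply_A, b, tol=1e-10, maxiter=500):
    """Conjugate gradient for A x = b. In the parallel code the two dot
    products per iteration become global sums over the whole machine."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r                        # global sum 1
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)         # global sum 2
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r                # global sum 1 of the next iteration
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100.0 * np.eye(100)     # SPD stand-in for the Dirac normal operator
b = rng.standard_normal(100)
x = cg(lambda v: A @ v, b)
print(np.linalg.norm(A @ x - b))      # residual near machine precision
```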

  17. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/Watt/s at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
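
    A toy version of the Lennard-Jones pair-force kernel that the benchmark exercises. This all-pairs NumPy form is for illustration only; the production SPaSM code uses cutoffs and cell lists, and offloads the force loop to the Cell SPUs.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sig=1.0):
    """All-pairs Lennard-Jones forces from
    V(r) = 4*eps*((sig/r)**12 - (sig/r)**6)."""
    d = pos[:, None, :] - pos[None, :, :]     # pairwise displacements
    r2 = np.einsum("ijk,ijk->ij", d, d)
    np.fill_diagonal(r2, np.inf)              # exclude self-interaction
    inv6 = (sig**2 / r2) ** 3                 # (sig/r)^6
    coef = 24.0 * eps * (2.0 * inv6**2 - inv6) / r2
    return np.einsum("ij,ijk->ik", coef, d)   # force on each particle

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, (64, 3))
f = lj_forces(pos)
print(np.abs(f.sum(axis=0)).max())   # ~0: Newton's third law holds
```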

  18. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background Parallel T-Coffee (PTC was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of the large-scale sequence alignments. It can be run on distributed memory clusters allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a super computer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  19. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for 2 implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows a good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers a better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
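
    A sketch of the global spiking list idea on a toy leaky integrate-and-fire network. Sizes, weights, and constants are invented, and the ODLM's oscillatory dynamics are not modeled; the point is only the list of firing neurons driving the update.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
W = rng.standard_normal((n, n)) * 0.05   # synaptic weights (toy values)
v = rng.uniform(0.0, 1.0, n)             # membrane potentials
threshold, leak = 1.0, 0.98

for step in range(100):
    v *= leak
    v += rng.uniform(0.0, 0.05, n)                   # background drive
    spiking_list = np.flatnonzero(v >= threshold)    # who fires this step
    # Only the columns of W belonging to firing neurons are touched; in
    # the parallel version all computing units process this same list.
    v += W[:, spiking_list].sum(axis=1)
    v[spiking_list] = 0.0                            # reset fired neurons
print("spikes in last step:", spiking_list.size)
```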

  20. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas, including the important effects of self-absorption of line-radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message passing paradigm, and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours
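
    The quoted figures are consistent with Amdahl's law: with a parallel fraction p = 0.7, the speedup is bounded by 1/(1-p) ≈ 3.3, which is what turns the 30-hour sequential run into roughly 10 hours. A two-line check:

```python
# Amdahl's law: S(n) = 1 / ((1 - p) + p / n) for parallel fraction p.
p = 0.70
for n_proc in (4, 16, 1024):
    print(n_proc, round(1.0 / ((1.0 - p) + p / n_proc), 2))
# 4 -> 2.11, 16 -> 2.91, 1024 -> 3.33 (approaching the limit 1/(1-p))
```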

  1. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity of which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists in representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product, a linear combination of vectors and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective consists in understanding the challenges of implementing CFD codes on new architectures.
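
    The three-operation vocabulary is easy to illustrate: a toy explicit diffusion step written as exactly one sparse matrix-vector product, one linear combination, and one dot product. The operator, step size, and step count are invented for the sketch.

```python
import numpy as np
from scipy import sparse

n = 1000
dt = 0.4 / n**2                      # explicit stability limit (illustrative)
# 1-D diffusion operator as a sparse matrix.
L = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") * n**2
u = np.exp(-((np.linspace(0.0, 1.0, n) - 0.5) ** 2) / 0.01)

for _ in range(500):
    Lu = L @ u                       # (1) sparse matrix-vector product
    u = u + dt * Lu                  # (2) linear combination of vectors (axpy)
print(np.sqrt(u @ u))                # (3) dot product, e.g. for monitoring
```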

  2. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  3. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources, the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys don't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be? That is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  4. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  5. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
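
    A hedged sketch of a benchmarking harness in the paper's spirit: time a small future-event-list kernel and report events per second, a number one could compare across boards against their price and power draw. The real benchmark runs a telecommunication simulation, not this toy loop.

```python
import heapq, time

def des_benchmark(n_events=200_000, n_sources=64):
    """Toy discrete-event loop over a heap-based future event list."""
    fel = [(0.0, i) for i in range(n_sources)]
    heapq.heapify(fel)
    processed = 0
    start = time.perf_counter()
    while processed < n_events:
        t, src = heapq.heappop(fel)
        # Schedule this source's next event (arbitrary per-source spacing).
        heapq.heappush(fel, (t + 1.0 + (src % 7) * 0.1, src))
        processed += 1
    return processed / (time.perf_counter() - start)

print(f"{des_benchmark():,.0f} events/s")
```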

  6. Topics in statistical mechanics

    International Nuclear Information System (INIS)

    Elser, V.

    1984-05-01

    This thesis deals with four independent topics in statistical mechanics: (1) the dimer problem is solved exactly for a hexagonal lattice with general boundary using a known generating function from the theory of partitions. It is shown that the leading term in the entropy depends on the shape of the boundary; (2) continuum models of percolation and self-avoiding walks are introduced with the property that their series expansions are sums over linear graphs with intrinsic combinatorial weights and explicit dimension dependence; (3) a constrained SOS model is used to describe the edge of a simple cubic crystal. Low and high temperature results are derived as well as the detailed behavior near the crystal facet; (4) the microscopic model of the lambda-transition involving atomic permutation cycles is reexamined. In particular, a new derivation of the two-component field theory model of the critical behavior is presented. Results for a lattice model originally proposed by Kikuchi are extended with a high temperature series expansion and Monte Carlo simulation. 30 references

  7. Superconcentration and related topics

    CERN Document Server

    Chatterjee, Sourav

    2014-01-01

    A certain curious feature of random objects, introduced by the author as “super concentration,” and two related topics, “chaos” and “multiple valleys,” are highlighted in this book. Although super concentration has established itself as a recognized feature in a number of areas of probability theory in the last twenty years (under a variety of names), the author was the first to discover and explore its connections with chaos and multiple valleys. He achieves a substantial degree of simplification and clarity in the presentation of these findings by using the spectral approach. Understanding the fluctuations of random objects is one of the major goals of probability theory and a whole subfield of probability and analysis, called concentration of measure, is devoted to understanding these fluctuations. This subfield offers a range of tools for computing upper bounds on the orders of fluctuations of very complicated random variables. Usually, concentration of measure is useful when more direct prob...

  8. Topics in broken supersymmetry

    International Nuclear Information System (INIS)

    Lee, I.H.

    1984-01-01

    Studies on two topics in the framework of broken supersymmetry are presented. Chapter I is a brief introduction in which the motivation and the background of this work are discussed. In Chapter II, the author studies the decay K^+ → π^+ γγ in models with spontaneous supersymmetry breaking and finds that it is generally suppressed relative to the decay K^+ → π^+ ν̄ν of the conventional model, except possibly for a class of models where the scalar quark masses are generated by radiative corrections from a much larger supersymmetry-breaking scale. For a small range of scalar quark and photino mass parameters, the cascade decay process K^+ → π^+ π^0 → π^+ γγ will become dominant over the ν̄ν mode. The author also comments on the possibility of probing the neutrino mass through the K^+ → π^+ π^0 → π^+ ν̄ν cascade decay. Chapter III is concerned with the implications of explicit lepton-number-violating soft operators in a general low-energy effective theory with softly broken supersymmetry

  9. Topics in field theory

    International Nuclear Information System (INIS)

    Velasco, E.S.

    1986-01-01

    This dissertation deals with several topics of field theory. Chapter I is a brief outline of the work presented in the next chapters. In Chapter II, the Gauss-Bonnet-Chern theorem for manifolds with boundary is computed using the path integral representation of the Witten index for supersymmetric quantum mechanical systems. In Chapter III the action of N = 2 (Poincare) supergravity is obtained in terms of N = 1 superfields. In Chapter IV, N = 2 supergravity coupled to the (abelian) vector multiplet is projected into N = 1 superspace. There, the resulting set of constraints is solved in terms of unconstrained prepotentials and the action in terms of N = 1 superfields is constructed. In Chapter V the set of constraints for N = 2 conformal supergravity is projected into N = 1 superspace and solved in terms of N = 1 conformal supergravity fields and matter prepotentials. In Chapter VI the role of magnetic monopoles in the phase structure of the charge-one fixed-length abelian Higgs model on the lattice is investigated using analytic and numerical methods. The technique of monopole suppression is used to determine the phase transition lines that are monopole driven. Finally, in Chapter VII, the role of the charge of the Higgs field in the abelian Higgs model on the lattice is investigated

  10. Topics in inflationary cosmology

    International Nuclear Information System (INIS)

    Kahn, R.N.

    1985-01-01

    This thesis examines several topics in the theory of inflationary cosmology. It first proves the existence of Hawking Radiation during the slow-rolling period of a new inflationary universe. It then derives and somewhat extends Bardeen's gauge invariant formalism for calculating the growth of linear gravitational perturbations in a Friedmann-Robertson-Walker cosmological background. This formalism is then applied, first to several new inflationary universe models all of which show a Zel'dovich spectrum of fluctuations, but with amplitude sigma(100 4 ) above observational limits. The general formalism is next applied to models that exhibit primordial inflation. Fluctuations in these models also exhibit a Zel'dovich spectrum here with an acceptable amplitude. Finally the thesis presents the results of new, numerical calculations. A classical, (2 + 1) dimensional computer model is developed that includes a Higgs field (which drives inflation) along with enough auxiliary fields to generate dynamically not only a thermal bath, but also the fluctuations that naturally accompany that bath. The thesis ends with a discussion of future prospects

  11. Advanced verification topics

    CERN Document Server

    Bhattacharya, Bishnupriya; Hall, Gary; Heaton, Nick; Kashai, Yaron; Khan Neyaz; Kirshenbaum, Zeev; Shneydor, Efrat

    2011-01-01

    The Accellera Universal Verification Methodology (UVM) standard is architected to scale, but verification is growing and in more than just the digital design dimension. It is growing in the SoC dimension to include low-power and mixed-signal and the system integration dimension to include multi-language support and acceleration. These items and others all contribute to the quality of the SOC so the Metric-Driven Verification (MDV) methodology is needed to unify it all into a coherent verification plan. This book is for verification engineers and managers familiar with the UVM and the benefits it brings to digital verification but who also need to tackle specialized tasks. It is also written for the SoC project manager that is tasked with building an efficient worldwide team. While the task continues to become more complex, Advanced Verification Topics describes methodologies outside of the Accellera UVM standard, but that build on it, to provide a way for SoC teams to stay productive and profitable.

  12. Task-Driven Comparison of Topic Models.

    Science.gov (United States)

    Alexander, Eric; Gleicher, Michael

    2016-01-01

    Topic modeling, a method of statistically extracting thematic content from a large collection of texts, is used for a wide variety of tasks within text analysis. Though there are a growing number of tools and techniques for exploring single models, comparisons between models are generally reduced to a small set of numerical metrics. These metrics may or may not reflect a model's performance on the analyst's intended task, and can therefore be insufficient to diagnose what causes differences between models. In this paper, we explore task-centric topic model comparison, considering how we can both provide detail for a more nuanced understanding of differences and address the wealth of tasks for which topic models are used. We derive comparison tasks from single-model uses of topic models, which predominantly fall into the categories of understanding topics, understanding similarity, and understanding change. Finally, we provide several visualization techniques that facilitate these tasks, including buddy plots, which combine color and position encodings to allow analysts to readily view changes in document similarity.
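
    A minimal sketch of one such model-to-model comparison, matching topics between two models by similarity of their topic-word distributions (Jensen-Shannon divergence). The arrays and helper names are hypothetical stand-ins, not the authors' tool:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def match_topics(model_a, model_b):
    """Pair each topic in A with its closest topic in B.

    model_a, model_b: arrays of shape (n_topics, vocab_size) whose rows
    are topic-word probability distributions."""
    pairs = []
    for i, topic in enumerate(model_a):
        dists = [js_divergence(topic, other) for other in model_b]
        pairs.append((i, int(np.argmin(dists)), min(dists)))
    return pairs  # (topic in A, best match in B, divergence)

# Hypothetical 3-topic models over a 5-word vocabulary:
a = np.random.dirichlet(np.ones(5), size=3)
b = np.random.dirichlet(np.ones(5), size=3)
print(match_topics(a, b))
```

    Low divergences indicate topics the two models agree on; unmatched, high-divergence topics are exactly the differences the visualizations discussed above are meant to surface.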

  13. KEY TOPICS IN SPORTS MEDICINE

    Directory of Open Access Journals (Sweden)

    Amir Ali Narvani

    2006-12-01

    Full Text Available Key Topics in Sports Medicine is a single quick reference source for sports and exercise medicine. It presents the essential information from across relevant topic areas, and includes both the core and emerging issues in this rapidly developing field. It covers: 1) Sports injuries, rehabilitation and injury prevention, 2) Exercise physiology, fitness testing and training, 3) Drugs in sport, 4) Exercise and health promotion, 5) Sport and exercise for special and clinical populations, 6) The psychology of performance and injury. PURPOSE The Key Topics format provides extensive, concise information in an accessible, easy-to-follow manner. AUDIENCE The book is targeted at students and specialists in sports medicine and rehabilitation, athletic training, physiotherapy and orthopaedic surgery. The editors are authorities in their respective fields and this handbook depends on their extensive experience and knowledge accumulated over the years. FEATURES The book contains the information for clinical guidance, rapid access to concise details and facts. It is composed of 99 topics which present the information in an order that is considered logical and progressive as in most texts. Chapter headings are: 1. Functional Anatomy, 2. Training Principles / Development of Strength and Power, 3. Biomechanical Principles, 4. Biomechanical Analysis, 5. Physiology of Training, 6. Monitoring of Training Progress, 7. Nutrition, 8. Hot and Cold Climates, 9. Altitude, 10. Sport and Travelling, 11. Principles of Sport Injury Diagnosis, 12. Principles of Sport and Soft Tissue Management, 13. Principles of Physical Therapy and Rehabilitation, 14. Principles of Sport Injury Prevention, 15. Sports Psychology, 16. Team Sports, 17. Psychological Aspects of Injury in Sport, 18. Injury Repair Process, 19. Basic Biomechanics of Tissue Injury, 20. Plain Film Radiography in Sport, 21. Nuclear Medicine, 22. Diagnostic Ultrasound, 23. MRI Scan, 24. Other Imaging, 25. Head Injury, 26. Eye

  14. 75 FR 47310 - Solicitation for Nominations for New Clinical Preventive Health Topics To Be Considered for...

    Science.gov (United States)

    2010-08-05

    ... following set of criteria: Public health importance (burden of suffering, potential of preventive service to.../gynecology). c. Public health importance (burden of disease/suffering, potential of preventive service to... accomplishes these goals through scientific research and promotion of improvements in clinical practice...

  15. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large, such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from other domains, policy makers, and teachers. Another obstacle to the use of these models is that access to the high performance computing (HPC) accounts on which the models are run can be restrictive, with long wait times in job queues and delays caused by an arduous account-approval process, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  16. Topic Modeling of Hierarchical Corpora /

    OpenAIRE

    Kim, Do-kyum

    2014-01-01

    The sizes of modern digital libraries have grown beyond our capacity to comprehend manually. Thus we need new tools to help us in organizing and browsing large corpora of text that do not require manually examining each document. To this end, machine learning researchers have developed topic models, statistical learning algorithms for automatic comprehension of large collections of text. Topic models provide both global and local views of a corpus; they discover topics that run through the co...

  17. Web directories as topical context

    NARCIS (Netherlands)

    Kaptein, R.; Kamps, J.; Aly, R.; Hauff, C.; den Hamer, I.; Hiemstra, D.; Huibers, T.; de Jong, F.

    2009-01-01

    In this paper we explore whether the Open Directory (or DMOZ) can be used to classify queries into topical categories on different levels and whether we can use this topical context to improve retrieval performance. We have set up a user study to let test persons explicitly classify queries into

  18. Resources for Topics in Architecture.

    Science.gov (United States)

    Van Noate, Judith, Comp.

    This guide for conducting library research on topics in architecture or on the work of a particular architect presents suggestions for utilizing four categories of resources: books, dictionaries and encyclopedias, indexes, and a periodicals and series list (PASL). Two topics are researched as examples: the contemporary architect Richard Meier, and…

  19. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    Science.gov (United States)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithography-dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics, along with substrate, exposure, and all post-exposure processing, must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench with a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributed computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than less rigorous models; then some applications that use our distributed computing solution are shown

  20. Recent advances in topical anesthesia

    Science.gov (United States)

    2016-01-01

    Topical anesthetics act on the peripheral nerves and reduce the sensation of pain at the site of application. In dentistry, they are used to control local pain caused by needling, placement of orthodontic bands, the vomiting reflex, oral mucositis, and rubber-dam clamp placement. Traditional topical anesthetics contain lidocaine or benzocaine as active ingredients and are used in the form of solutions, creams, gels, and sprays. Eutectic mixture of local anesthetics (EMLA) cream, a mixture of various topical anesthetics, has been reported to be more potent than other anesthetics. Recently, new products with modified ingredients and application methods have been introduced into the market. These products may be used for mild pain during periodontal treatment, such as scaling. Dentists should be aware that topical anesthetics, although rarely, might induce allergic reactions or side effects as a result of an overdose. Topical anesthetics are useful aids during dental treatment, as they reduce dental phobia, especially in children, by mitigating discomfort and pain. PMID:28879311

  1. APT accelerator. Topical report

    International Nuclear Information System (INIS)

    Lawrence, G.; Rusthoi, D.

    1995-03-01

    The Accelerator Production of Tritium (APT) project, sponsored by Department of Energy Defense Programs (DOE/DP), involves the preconceptual design of an accelerator system to produce tritium for the nation's stockpile of nuclear weapons. Tritium is an isotope of hydrogen used in nuclear weapons, and must be replenished because of radioactive decay (its half-life is approximately 12 years). Because the annual production requirement for tritium has greatly decreased since the end of the Cold War, an alternative approach to reactors for tritium production, based on a linear accelerator, is now being seriously considered. The annual tritium requirement at the time this study was undertaken (1992-1993) was 3/8 that of the 1988 goal, usually stated as 3/8-Goal. Continued reduction in the number of weapons in the stockpile has led to a revised (lower) production requirement today (March, 1995). The production requirement needed to maintain the reduced stockpile, as stated in the recent Nuclear Posture Review (summer 1994), is approximately 3/16-Goal, half the previous level. The Nuclear Posture Review also requires that the production plant be designed to accommodate a production increase (surge) to 3/8-Goal capability within five years, to allow recovery from a possible extended outage of the tritium plant. A multi-laboratory team, collaborating with several industrial partners, has developed a preconceptual APT design for the 3/8-Goal, operating at 75% capacity. The team has presented APT as a promising alternative to the reactor concepts proposed for Complex-21. Given the requirements of a reduced weapons stockpile, APT offers both significant safety, environmental, and production-flexibility advantages in comparison with reactor systems, and the prospect of successful development in time to meet the US defense requirements of the 21st Century.

  2. APT accelerator. Topical report

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, G.; Rusthoi, D. [comp.] [ed.]

    1995-03-01

    The Accelerator Production of Tritium (APT) project, sponsored by Department of Energy Defense Programs (DOE/DP), involves the preconceptual design of an accelerator system to produce tritium for the nation's stockpile of nuclear weapons. Tritium is an isotope of hydrogen used in nuclear weapons, and must be replenished because of radioactive decay (its half-life is approximately 12 years). Because the annual production requirement for tritium has greatly decreased since the end of the Cold War, an alternative approach to reactors for tritium production, based on a linear accelerator, is now being seriously considered. The annual tritium requirement at the time this study was undertaken (1992-1993) was 3/8 that of the 1988 goal, usually stated as 3/8-Goal. Continued reduction in the number of weapons in the stockpile has led to a revised (lower) production requirement today (March, 1995). The production requirement needed to maintain the reduced stockpile, as stated in the recent Nuclear Posture Review (summer 1994), is approximately 3/16-Goal, half the previous level. The Nuclear Posture Review also requires that the production plant be designed to accommodate a production increase (surge) to 3/8-Goal capability within five years, to allow recovery from a possible extended outage of the tritium plant. A multi-laboratory team, collaborating with several industrial partners, has developed a preconceptual APT design for the 3/8-Goal, operating at 75% capacity. The team has presented APT as a promising alternative to the reactor concepts proposed for Complex-21. Given the requirements of a reduced weapons stockpile, APT offers both significant safety, environmental, and production-flexibility advantages in comparison with reactor systems, and the prospect of successful development in time to meet the US defense requirements of the 21st Century.

  3. Topical melatonin for treatment of androgenetic alopecia.

    Science.gov (United States)

    Fischer, Tobias W; Trüeb, Ralph M; Hänggi, Gabriella; Innocenti, Marcello; Elsner, Peter

    2012-10-01

    In the search for alternative agents to oral finasteride and topical minoxidil for the treatment of androgenetic alopecia (AGA), melatonin, a potent antioxidant and growth modulator, was identified as a promising candidate based on in vitro and in vivo studies. One pharmacodynamic study on topical application of melatonin and four clinical pre-post studies were performed in patients with androgenetic alopecia or general hair loss and evaluated by standardised questionnaires, TrichoScan, 60-second hair count test and hair pull test. FIVE CLINICAL STUDIES SHOWED POSITIVE EFFECTS OF A TOPICAL MELATONIN SOLUTION IN THE TREATMENT OF AGA IN MEN AND WOMEN WHILE SHOWING GOOD TOLERABILITY: (1) Pharmacodynamics under once-daily topical application in the evening showed no significant influence on endogenous serum melatonin levels. (2) An observational study involving 30 men and women showed a significant reduction in the degree of severity of alopecia after 30 and 90 days (P < …). A topical melatonin solution can be considered as a treatment option in androgenetic alopecia.

  4. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime efficient poses several challenges for framework developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicit knowledge of the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using
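
    A minimal sketch of the building-block idea described above: the domain scientist only composes pre-programmed operations, while the framework decides how to execute them (here, a trivial tile-parallel process pool). The operations and names are illustrative assumptions, not the PCRaster API:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial
import numpy as np

def slope(tile):
    """Building block: magnitude of the local elevation gradient."""
    gy, gx = np.gradient(tile)
    return np.hypot(gx, gy)

def accumulate(tile, rate=0.1):
    """Building block: toy accumulation step."""
    return tile + rate * tile.mean()

def apply_chain(tile, blocks):
    for block in blocks:
        tile = block(tile)
    return tile

def run_model(raster, blocks, n_workers=4, n_tiles=4):
    """The framework splits the raster and runs the chain in parallel;
    the modeller never sees the parallelisation."""
    tiles = np.array_split(raster, n_tiles)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.vstack(list(pool.map(partial(apply_chain, blocks=blocks), tiles)))

if __name__ == "__main__":
    dem = np.random.rand(400, 400)  # hypothetical elevation raster
    print(run_model(dem, [slope, accumulate]).shape)
```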

  5. Mental Mechanisms for Topics Identification

    Directory of Open Access Journals (Sweden)

    Louis Massey

    2014-01-01

    Full Text Available Topics identification (TI) is the process of determining the main themes present in natural language documents. The current TI modeling paradigm aims at acquiring semantic information from statistical properties of large text datasets. We investigate the mental mechanisms responsible for the identification of topics in a single document given existing knowledge. Our main hypothesis is that topics are the result of accumulated neural activation of loosely organized information stored in long-term memory (LTM). We experimentally tested our hypothesis with a computational model that simulates LTM activation. The model assumes activation decay as an unavoidable phenomenon originating from the bioelectric nature of neural systems. Since decay should negatively affect the quality of topics, the model predicts the presence of short-term memory (STM) to keep the focus of attention on a few words, with the expected outcome of restoring quality to a baseline level. Our experiments measured topic quality on over 300 documents with various decay rates and STM capacities. Our results showed that accumulated activation of loosely organized information is an effective mental computational commodity for identifying topics. It was furthermore confirmed that rapid decay is detrimental to topic quality but that a limited-capacity STM restores quality to a baseline level, even exceeding it slightly.
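
    A minimal sketch of the hypothesis as stated: word occurrences accumulate activation in an associative LTM, activation decays each step, and a small-capacity STM buffer keeps recent focus words refreshed. All structures and rates here are illustrative assumptions, not the authors' model:

```python
from collections import deque, defaultdict

def identify_topics(words, associations, decay=0.9, stm_capacity=4, top_k=3):
    """Accumulate decaying activation over loosely organized LTM concepts.

    associations: dict word -> list of related concepts (stand-in for LTM)."""
    activation = defaultdict(float)
    stm = deque(maxlen=stm_capacity)           # short-term memory buffer
    for word in words:
        stm.append(word)
        for focus in stm:                      # STM re-activates recent words
            for concept in associations.get(focus, [focus]):
                activation[concept] += 1.0
        for concept in activation:             # unavoidable activation decay
            activation[concept] *= decay
    return sorted(activation, key=activation.get, reverse=True)[:top_k]

# Toy LTM and document:
ltm = {"goal": ["sport"], "match": ["sport"], "vote": ["politics"]}
print(identify_topics("goal match goal vote match".split(), ltm))
```

    With a fast decay rate and stm_capacity set to 1, the ranking degrades, which is the qualitative behaviour the paper reports.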

  6. Topic Model for Graph Mining.

    Science.gov (United States)

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng

    2015-12-01

    Graph mining has been a popular research area because of its numerous application scenarios. Much unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that existing methods cannot adequately discover the topics hidden in graph-structured data, which can be beneficial for both unsupervised and supervised learning on graphs. Although topic models have proved to be very successful in discovering latent topics, standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of supervised and unsupervised learning on graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms latent Dirichlet allocation on classification when the unveiled topics of the two models are used to represent graphs.
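
    To make the Bernoulli-edge idea concrete, the fragment below evaluates the likelihood of a graph's edges given per-node topic mixtures, with edge probability driven by topic agreement. This parameterisation is an illustrative guess at the flavour of such models, not the GTM of the paper:

```python
import numpy as np

def edge_log_likelihood(theta, edges, non_edges, topic_affinity):
    """theta: (n_nodes, n_topics) topic mixtures; topic_affinity: (K, K)
    matrix of Bernoulli edge probabilities between topic pairs."""
    def p_edge(i, j):
        return float(theta[i] @ topic_affinity @ theta[j])
    ll = sum(np.log(p_edge(i, j)) for i, j in edges)
    ll += sum(np.log(1.0 - p_edge(i, j)) for i, j in non_edges)
    return ll

theta = np.random.dirichlet(np.ones(2), size=4)        # 4 nodes, 2 topics
affinity = np.array([[0.9, 0.1], [0.1, 0.8]])          # within-topic edges likely
print(edge_log_likelihood(theta, [(0, 1)], [(0, 2)], affinity))
```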

  7. Topical steroid-damaged skin

    Directory of Open Access Journals (Sweden)

    Anil Abraham

    2014-01-01

    Full Text Available Topical steroids, commonly used for a wide range of skin disorders, are associated with both systemic and cutaneous side effects. This article aims to raise awareness among practitioners of the cutaneous side effects of easily available, over-the-counter topical steroids. This makes it important for us as dermatologists to weigh the usefulness of topical steroids against their side effects, and to make an informed decision regarding their use in each individual based on factors such as age, site involved and type of skin disorder.

  8. Topics in lightwave transmission systems

    CERN Document Server

    Li, Tingye

    1991-01-01

    Topics in Lightwave Transmission Systems is the second volume of a treatise on optical fiber communications devoted to the science, engineering, and application of information transmission via optical fibers. The first volume, published in 1985, dealt exclusively with fiber fabrication. The present volume contains topics that pertain to subsystems and systems. The book contains five chapters and begins with discussions of transmitters and receivers, which are basic to systems now operating in the field. Subsequent chapters cover topics relating to coherent systems: frequency and phase m

  9. Topics in quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Lamon, Raphael

    2010-06-29

    Furthermore, we succeed in solving the quantum Gauss constraint. In the second part of the thesis we introduce some aspects of phenomenological quantum gravity and their possible detectable signatures. The goal of phenomenological quantum gravity is to derive conclusions and make predictions from expected characteristics of a full theory of quantum gravity. One possibility is an energy-dependent speed of light arising from a quantized space, such that the propagation times of two photons differ. However, the amount of these corrections is very small, such that only cosmological distances can be considered. Gamma-ray bursts (GRB) are ideal candidates as they are short but very luminous bursts of gamma-rays taking place at distances billions of light-years away. We study GRBs detected by the European satellite INTEGRAL and develop a new method to analyze unbinned data. A χ²-test will provide a lower bound for quantum gravity corrections, which will be nevertheless well below the Planck mass. Then we shall study the sensitivity of NASA's new satellite Fermi Gamma-ray Space Telescope and conclude that it is well suited to detect corrections. This prediction has just been confirmed when Fermi detected a very energetic photon emanating from GRB 090510, which highly constrains models with linear corrections to the speed of light. However, as shown at the end of this thesis, more bursts are needed in order to definitively falsify such models. (orig.)
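
    The energy-dependent propagation delay invoked above is usually written, for a correction linear in energy, as below (a standard phenomenological form supplied for orientation; ξ and E_QG are model parameters, and a cosmological treatment replaces D/c with a redshift integral):

```latex
% Linear-in-energy modified dispersion and the induced arrival-time lag
% between two photons with energies E_1 < E_2 over a distance D:
v(E) \simeq c\left(1 - \xi\,\frac{E}{E_{\mathrm{QG}}}\right), \qquad
\Delta t \simeq \xi\,\frac{E_2 - E_1}{E_{\mathrm{QG}}}\,\frac{D}{c}
```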

  10. Topics in quantum gravity

    International Nuclear Information System (INIS)

    Lamon, Raphael

    2010-01-01

    We succeed in solving the quantum Gauss constraint. In the second part of the thesis we introduce some aspects of phenomenological quantum gravity and their possible detectable signatures. The goal of phenomenological quantum gravity is to derive conclusions and make predictions from expected characteristics of a full theory of quantum gravity. One possibility is an energy-dependent speed of light arising from a quantized space, such that the propagation times of two photons differ. However, the amount of these corrections is very small, such that only cosmological distances can be considered. Gamma-ray bursts (GRB) are ideal candidates as they are short but very luminous bursts of gamma-rays taking place at distances billions of light-years away. We study GRBs detected by the European satellite INTEGRAL and develop a new method to analyze unbinned data. A χ²-test will provide a lower bound for quantum gravity corrections, which will be nevertheless well below the Planck mass. Then we shall study the sensitivity of NASA's new satellite Fermi Gamma-ray Space Telescope and conclude that it is well suited to detect corrections. This prediction has just been confirmed when Fermi detected a very energetic photon emanating from GRB 090510, which highly constrains models with linear corrections to the speed of light. However, as shown at the end of this thesis, more bursts are needed in order to definitively falsify such models. (orig.)

  11. Topical tacrolimus for atopic dermatitis.

    Science.gov (United States)

    Cury Martins, Jade; Martins, Ciro; Aoki, Valeria; Gois, Aecio F T; Ishii, Henrique A; da Silva, Edina M K

    2015-07-01

    .98, 1 study, n = 139, low-quality evidence), but the effects were equivocal when evaluating BSA. In the comparison of tacrolimus 0.03% with moderate-to-potent corticosteroids, no difference was found in most of the outcomes measured (including physician's and participant's assessment, and also for the secondary outcomes), but in two studies a marginal benefit favouring the corticosteroid group was found for the EASI and BSA scores. Burning was more frequent with tacrolimus 0.03% than with corticosteroids (RR 2.48, 95% CI 1.96 to 3.14, 5 studies, 1883 participants, high-quality evidence), but no difference was found for skin infections. Symptoms observed were mild and transient. The comparison between the two calcineurin inhibitors (pimecrolimus and tacrolimus) showed the same overall incidence of adverse events, with a small difference in the frequency of local effects. Serious adverse events were rare, occurred in both the tacrolimus and corticosteroid groups, and in most cases were considered to be unrelated to the treatment. No cases of lymphoma were noted in the included studies nor in the non-comparative studies; cases were only noted in spontaneous reports, cohorts, and case-control studies. Systemic absorption was rarely detectable, only at low levels, and decreased with time. An exception is diseases with severe barrier defects, such as Netherton's syndrome, lamellar ichthyosis, and a few others, with case reports of higher absorption. We evaluated clinical trials, case reports, and in vivo, in vitro, and animal studies, and found no evidence that topical tacrolimus could cause skin atrophy. Tacrolimus 0.1% was better than low-potency corticosteroids, pimecrolimus 1%, and tacrolimus 0.03%. Results were equivocal when comparing both dose formulations to moderate-to-potent corticosteroids. Tacrolimus 0.03% was superior to mild corticosteroids and pimecrolimus. Both tacrolimus formulations seemed to be

  12. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    The software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte Carlo method using the supercomputer SKIF BSU was developed. The program generates a large number of rays propagating from a source to the refractive structure. Each ray trajectory is calculated under the assumption of geometrical optics, and absorption is computed for each ray inside the refractive structure. Dynamic arrays are used to store the calculated ray parameters, which allows the X-ray field distribution to be restored very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10⁸ X-rays takes 3 hours on 1 processor and 6 minutes on 30 processors. 10⁹ X-rays were calculated with 'Xray-SKIF', which allows the X-ray field after the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
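
    A minimal sketch of the simulation pattern described (independent rays, absorption along the path, trivially parallel over processes). The geometry and attenuation values are hypothetical, not those of 'Xray-SKIF':

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

MU = 0.5          # hypothetical linear attenuation coefficient [1/mm]
THICKNESS = 2.0   # hypothetical lens thickness on axis [mm]

def trace_batch(n_rays, seed):
    """Trace one independent batch of rays; sum the transmitted intensity."""
    rng = np.random.default_rng(seed)
    heights = rng.uniform(-1.0, 1.0, n_rays)       # ray heights at the lens
    path = THICKNESS * (1.0 + heights ** 2)        # toy parabolic profile
    return np.exp(-MU * path).sum()                # Beer-Lambert absorption

if __name__ == "__main__":
    n_total, n_procs = 10 ** 6, 4                  # rays, worker processes
    per_proc = n_total // n_procs
    with ProcessPoolExecutor(max_workers=n_procs) as pool:
        parts = pool.map(trace_batch, [per_proc] * n_procs, range(n_procs))
    print("mean transmitted intensity:", sum(parts) / n_total)
```

    Because the rays are independent, doubling the number of workers roughly halves the run time, which is the proportional scaling the record reports.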

  13. Topic structure for information retrieval

    NARCIS (Netherlands)

    He, J.; Sanderson, M.; Zhai, C.; Zobel, J.; Allan, J.; Aslam, J.A.

    2009-01-01

    In my research, I propose a coherence measure with the goal of discovering and using topic structures within and between documents, and I explore its extensions and applications in information retrieval.

  14. Selected topics in nuclear structure

    International Nuclear Information System (INIS)

    1994-01-01

    A collection of abstracts on selected topics in nuclear structure is given. Special attention is paid to collective excitations and high-spin states of nuclei, giant resonance structure, nuclear reaction mechanisms, and so on.

  15. Key Topics in Sports Medicine

    OpenAIRE

    2006-01-01

    Key Topics in Sports Medicine is a single quick reference source for sports and exercise medicine. It presents the essential information from across relevant topic areas, and includes both the core and emerging issues in this rapidly developing field. It covers: 1) Sports injuries, rehabilitation and injury prevention, 2) Exercise physiology, fitness testing and training, 3) Drugs in sport, 4) Exercise and health promotion, 5) Sport and exercise for special and clinical populations, 6) The ps...

  16. Topics of Bioengineering in Wikipedia

    Directory of Open Access Journals (Sweden)

    Vassia Atanassova

    2009-10-01

    Full Text Available The present report aims to give a snapshot of how topics from the field of bioengineering (bioinformatics, bioprocess systems, biomedical engineering, biotechnology, etc.) are currently covered in the free electronic encyclopedia Wikipedia. It also offers insights and information about what Wikipedia is, how it functions, and how and when to cite Wikipedia articles, if necessary. Several external wikis devoted to topics of bioengineering are also listed and reviewed.

  17. Topics in modern differential geometry

    CERN Document Server

    Verstraelen, Leopold

    2017-01-01

    A variety of introductory articles is provided on a wide range of topics, including variational problems on curves and surfaces with anisotropic curvature. Experts in the fields of Riemannian, Lorentzian and contact geometry present state-of-the-art reviews of their topics. The contributions are written on a graduate level and contain extended bibliographies. The ten chapters are the result of various doctoral courses which were held in 2009 and 2010 at universities in Leuven, Serbia, Romania and Spain.

  18. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  19. Selected topics of fluid mechanics

    Science.gov (United States)

    Kindsvater, Carl E.

    1958-01-01

    the Euler, Froude, Reynolds, Weber, and Cauchy numbers are defined as essential tools for interpreting and using experimental data. The derivations of the energy and momentum equations are treated in detail. One-dimensional equations for steady nonuniform flow are developed, and the restrictions applicable to the equations are emphasized. Conditions of uniform and gradually varied flow are discussed, and the origin of the Chezy equation is examined in relation to both the energy and the momentum equations. The inadequacy of all uniform-flow equations as a means of describing gradually varied flow is explained. Thus, one of the definitive problems of river hydraulics is analyzed in the light of present knowledge. This report is the outgrowth of a series of short schools conducted during the spring and summer of 1953 for engineers of the Surface Water Branch, Water Resources Division, U. S. Geological Survey. The topics considered are essentially the same as the topics selected for inclusion in the schools. However, in order that they might serve better as a guide and outline for informal study, the arrangement of the writer's original lecture notes has been considerably altered. The purpose of the report, like the purpose of the schools which inspired it, is to build a simple but strong framework of the fundamentals of fluid mechanics. It is believed that this framework is capable of supporting a detailed analysis of most of the practical problems met by the engineers of the Geological Survey. It is hoped that the least accomplishment of this work will be to inspire the reader with the confidence and desire to read more of the recent and current technical literature of modern fluid mechanics.

  20. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST. Program Files doi:http://dx.doi.org/10.17632/w7rgdrhb85.1 Licensing provisions: BSD 3-clause Programming language: C, C++ External routines/libraries: For compiling: SCons, MPI (optional) Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl. Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016 Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version. Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton. Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and
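
    A minimal sketch of the multi-instance sampling idea described above: several independent MD instances run with different seeds, and their sampled flow quantities are ensemble-averaged before being handed to the continuum solver. The interfaces are invented for illustration and are not MaMiCo's API:

```python
import numpy as np

def run_md_instance(seed, n_cells=8, n_steps=100):
    """Stand-in for one MD instance: returns noisy per-cell velocities."""
    rng = np.random.default_rng(seed)
    true_profile = np.linspace(0.0, 1.0, n_cells)      # e.g. Couette flow
    samples = true_profile + rng.normal(0.0, 0.5, (n_steps, n_cells))
    return samples.mean(axis=0)                        # time average

def ensemble_average(n_instances=32):
    """Average over an ensemble of MD instances to smooth thermal noise,
    enabling smooth fields on short time scales."""
    fields = [run_md_instance(seed) for seed in range(n_instances)]
    return np.mean(fields, axis=0)

# The smoothed field would then be imposed on the continuum solver's cells:
print(np.round(ensemble_average(), 2))
```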

  1. Three-dimensional kinetic simulations of whistler turbulence in solar wind on parallel supercomputers

    Science.gov (United States)

    Chang, Ouliang

    The objective of this dissertation is to study the physics of whistler turbulence evolution and its role in energy transport and dissipation in the solar wind plasmas through computational and theoretical investigations. This dissertation presents the first fully three-dimensional (3D) particle-in-cell (PIC) simulations of whistler turbulence forward cascade in a homogeneous, collisionless plasma with a uniform background magnetic field B₀, and the first 3D PIC simulation of whistler turbulence with both forward and inverse cascades. Such computationally demanding research is made possible through the use of massively parallel, high performance electromagnetic PIC simulations on state-of-the-art supercomputers. Simulations are carried out to study characteristic properties of whistler turbulence under variable solar wind fluctuation amplitude (εe) and electron beta (βe), relative contributions to energy dissipation and electron heating in whistler turbulence from the quasilinear scenario and the intermittency scenario, and whistler turbulence preferential cascading direction and wavevector anisotropy. The 3D simulations of whistler turbulence exhibit a forward cascade of fluctuations into a broadband, anisotropic, turbulent spectrum at shorter wavelengths, with wavevectors preferentially quasi-perpendicular to B₀. The overall electron heating yields T∥ > T⊥ for all εe and βe values, indicating the primary linear wave-particle interaction is Landau damping. But linear wave-particle interactions play a minor role in shaping the wavevector spectrum, whereas nonlinear wave-wave interactions are overall stronger and faster processes, and ultimately determine the wavevector anisotropy. Simulated magnetic energy spectra as a function of wavenumber show a spectral break to steeper slopes, which scales as k⊥λe ≃ 1 independent of βe values, where λe is the electron inertial length, qualitatively similar to solar wind observations. Specific
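
    For reference, the electron inertial length that sets the spectral-break scale quoted above is the standard plasma quantity below (supplied for orientation, not quoted from the dissertation):

```latex
% Electron inertial length (electron skin depth):
\lambda_e = \frac{c}{\omega_{pe}}, \qquad
\omega_{pe} = \sqrt{\frac{4\pi n_e e^2}{m_e}} \quad (\text{Gaussian units})
% The simulated magnetic spectra steepen near k_\perp \lambda_e \simeq 1.
```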

  2. Considering subject positions with Biesta

    OpenAIRE

    Välitalo, R. (Riku)

    2017-01-01

    Abstract People who attended the ICPIC conference last summer were given an opportunity to consider some perspectives offered by the acknowledged scholar and educational thinker Gert Biesta. His presentation in Madrid focused on exploring the educational significance of doing philosophy with children from a particular viewpoint. Biesta addressed the question of whether the Philosophy for Children (P4C) movement can offer something more than a clear head, that is, a critical, creative, caring a...

  3. Deep Unfolding for Topic Models.

    Science.gov (United States)

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrating probabilistic generative models with deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, unsupervised and supervised topic models are inferred via the variational inference algorithm, where the model parameters are estimated by maximizing the lower bound on the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and by the model parameters being tied across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.
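
    A minimal sketch of the unfolding idea: the iterative inference update of a topic model is treated as a stack of layers, with per-layer parameters untied so each layer can be trained separately. The multiplicative update form and names here are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def unfolded_inference(doc_counts, topic_word_layers):
    """Infer a document's topic proportions through L unfolded layers.

    topic_word_layers: list of (K, V) matrices, one per layer; untying
    them is what turns iterative inference into a trainable deep net."""
    K = topic_word_layers[0].shape[0]
    theta = np.full(K, 1.0 / K)                    # uniform initialisation
    for phi in topic_word_layers:                  # one update per layer
        # responsibility of each topic for each word, then an
        # exponentiated (multiplicative) update of the proportions:
        resp = (theta[:, None] * phi) / (theta @ phi + 1e-12)
        grad = (resp * doc_counts).sum(axis=1) / (doc_counts.sum() + 1e-12)
        theta = theta * np.exp(grad)
        theta /= theta.sum()
    return theta

V, K, L = 20, 3, 5
layers = [np.random.dirichlet(np.ones(V), size=K) for _ in range(L)]
doc = np.random.randint(0, 3, size=V).astype(float)
print(unfolded_inference(doc, layers))
```

    In training, each layer's phi would be adjusted by back-propagating the end criterion (likelihood or classification loss) through this stack, rather than keeping one tied phi as in variational inference.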

  4. Improved Collaborative Filtering Algorithm using Topic Model

    Directory of Open Access Journals (Sweden)

    Liu Na

    2016-01-01

    Full Text Available Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated from ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We treat the user-item matrix as a document-word matrix: users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieves better performance compared with other state-of-the-art algorithms on the MovieLens data sets.
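
    A minimal sketch of the representation described: each user becomes a "document" whose "words" are the items they rated, and a standard LDA implementation recovers latent interest topics. The data and parameter choices are hypothetical, and gensim is used here as one readily available LDA implementation, not the paper's code:

```python
from gensim import corpora, models

# Hypothetical user histories: items each user interacted with.
user_items = [
    ["matrix", "alien", "bladerunner"],
    ["matrix", "bladerunner", "inception"],
    ["notebook", "titanic", "amelie"],
    ["titanic", "amelie", "inception"],
]

dictionary = corpora.Dictionary(user_items)           # item vocabulary
corpus = [dictionary.doc2bow(items) for items in user_items]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                      passes=20, random_state=0)

# Topic mixture of user 0; users with similar mixtures are candidates
# for exchanging recommendations.
print(lda.get_document_topics(corpus[0]))
```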

  5. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
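
    For orientation, the two principles the review centres on are commonly stated as follows (standard textbook forms, not quoted from the paper):

```latex
% Bayes' theorem for updating a model M given data D:
p(M \mid D) = \frac{p(D \mid M)\, p(M)}{p(D)}
% Maximum entropy: among all distributions consistent with the known
% constraints, choose the one maximizing the entropy
S[p] = -\sum_i p_i \ln p_i
```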

  6. Quantum mechanics II advanced topics

    CERN Document Server

    Rajasekar, S

    2015-01-01

    Quantum Mechanics II: Advanced Topics uses more than a decade of research and the authors' own teaching experience to expound on some of the more advanced topics and current research in quantum mechanics. A follow-up to the authors' introductory book Quantum Mechanics I: The Fundamentals, this book begins with a chapter on quantum field theory, and goes on to present basic principles, key features, and applications. It outlines recent quantum technologies and phenomena, and introduces growing topics of interest in quantum mechanics. The authors describe promising applications that include ghost imaging, detection of weak amplitude objects, entangled two-photon microscopy, detection of small displacements, lithography, metrology, and teleportation of optical images. They also present worked-out examples and provide numerous problems at the end of each chapter.

  7. Topical application of hemostatic paste

    Directory of Open Access Journals (Sweden)

    Mohammad Mizanur Rahman

    2017-02-01

    Full Text Available As a measure to control minor surgical bleeding, surgeons usually depend on a number of hemostatic aids. Topical use of bovine thrombin is a widely used procedure to arrest such minor bleeding. A 35-year-old male sergeant of the Bangladesh Air Force presented with repeated development of hematoma in his left thigh without any history of trauma or previous history of bleeding. Critical analysis of the patient's history and routine and sophisticated hematological investigations revealed that the patient had developed an anti-thrombin antibody following the application of hemostatic paste in a tooth socket five years earlier, during a minor dental procedure to stop a negligible bleeding episode. Therefore, topical use of hemostatic glue/paste or bovine thrombin should be avoided for controlling minor bleeding, as recombinant human thrombin is now available for topical use.

  8. Selected topics in nuclear structure

    International Nuclear Information System (INIS)

    Solov'ev, V.G.; Gromov, K.Ya.; Malov, L.A.; Shilov, V.M.

    1994-01-01

    The Fourth International Conference on selected topics in nuclear structure was held at Dubna in July 1994 on recent experimental and theoretical investigations in nuclear structure. Topics discussed were the following: nuclear structure at low-energy excitations (collective quasiparticle phenomena, proton-neutron interactions, microscopic and phenomenological theories of nuclear structure); nuclear structure studies with charged particles, heavy ions, neutrons and photons; nuclei at high angular momenta and superdeformation; structure and decay properties of giant resonances; charge-exchange resonances and β-decay; semiclassical approaches to large amplitude collective motion and the structure of hot nuclei

  9. Topics in millimeter wave technology

    CERN Document Server

    Button, Kenneth

    1988-01-01

    Topics in Millimeter Wave Technology, Volume 1 presents topics related to millimeter wave technology, including fin-lines and passive components realized in fin-lines, suspended striplines, suspended substrate microstrips, and modal power exchange in multimode fibers. A miniaturized monopulse assembly constructed in planar waveguide with multimode scalar horn feeds is also described. This volume comprises five chapters, the first of which deals with analysis and synthesis techniques for fin-lines as well as the various passive components realized in fin-line. Tapers, discontinuities,

  10. Selected topics in nuclear structure

    Energy Technology Data Exchange (ETDEWEB)

    Solov` ev, V G; Gromov, K Ya; Malov, L A; Shilov, V M

    1994-12-31

    The Fourth International Conference on selected topics in nuclear structure was held at Dubna in July 1994 on recent experimental and theoretical investigations in nuclear structure. Topics discussed were the following: nuclear structure at low-energy excitations (collective quasiparticle phenomena, proton-neutron interactions, microscopic and phenomenological theories of nuclear structure); nuclear structure studies with charged particles, heavy ions, neutrons and photons; nuclei at high angular momenta and superdeformation; structure and decay properties of giant resonances; charge-exchange resonances and β-decay; semiclassical approaches to large amplitude collective motion and the structure of hot nuclei.

  11. Topics in current aerosol research

    CERN Document Server

    Hidy, G M

    1971-01-01

    Topics in Current Aerosol Research deals with the fundamental aspects of aerosol science, with emphasis on experiment and theory describing highly dispersed aerosols (HDAs) as well as the dynamics of charged suspensions. Topics covered range from the basic properties of HDAs to their formation and methods of generation; sources of electric charges; interactions between fluid and aerosol particles; and one-dimensional motion of charged cloud of particles. This volume is comprised of 13 chapters and begins with an introduction to the basic properties of HDAs, followed by a discussion on the form

  12. 2011 annual meeting on nuclear technology. Pt. 4. Topical sessions

    International Nuclear Information System (INIS)

    Schoenfelder, Christian; Dams, Wolfgang

    2011-01-01

    Summary report on the Topical Session of the Annual Conference on Nuclear Technology held in Berlin, 17 to 19 May 2011: - Nuclear Competence in Germany and Europe. The Topical Session: - Sodium Cooled Fast Reactors -- will be covered in a report in a further issue of atw. The reports on the Topical Sessions: - CFD-Simulations for Safety Relevant Tasks; and - Final Disposal: From Scientific Basis to Application; - Characteristics of a High Reliability Organization (HRO) Considering Experience Gained from Events at Nuclear Power Stations -- have been covered in atw 7, 8/9, and 10 (2011). (orig.)

  13. Modelling parking behaviour considering heterogeneity

    Energy Technology Data Exchange (ETDEWEB)

    San Martin, G.A.; Ibeas Portilla, A.; Alonso Oreña, B.; Olio, L. del

    2016-07-01

    Most motorized trips in small and medium-sized cities are made by public transport and, above all, by private vehicle. This has saturated the parking systems of these cities, causing important problems for society; one of the most important is the high occupancy of public space by parked vehicles. It is therefore necessary to estimate models that reproduce users' behaviour when choosing where to park in cities, in order to implement transport policies that improve transport efficiency and parking systems. The aim of this paper is the specification and estimation of models that simulate users' behaviour when choosing among the parking alternatives available in the city: free on-street parking, paid on-street parking, paid underground parking, and Park and Ride (which does not yet exist there). For this purpose, a multinomial logit model is proposed that considers systematic and random variations in tastes, as sketched below. Data on users' behaviour across the different parking alternatives were obtained through a stated preference survey campaign carried out in May 2015 in the principal parking zones of the city of Santander. In this paper, we provide a number of improvements over previously developed methodologies because we introduce much more realism into the stated preference scenarios, obtaining better fits. (Author)
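
    A minimal sketch of the multinomial logit choice probability underlying such models: each parking alternative gets a utility (here a linear function of cost and walking time) and the choice probability is a softmax over utilities. The coefficients and attributes are hypothetical, not the paper's estimates:

```python
import numpy as np

def mnl_probabilities(attrs, beta):
    """attrs: (n_alternatives, n_features); beta: (n_features,).
    Returns multinomial logit choice probabilities."""
    v = attrs @ beta                       # systematic utilities
    expv = np.exp(v - v.max())             # numerically stabilised softmax
    return expv / expv.sum()

# Alternatives: [cost in euro, walk time in minutes] for
# free on-street, paid on-street, underground, and park & ride:
attrs = np.array([[0.0, 8.0], [1.5, 3.0], [2.0, 2.0], [0.5, 12.0]])
beta = np.array([-0.9, -0.15])             # illustrative taste coefficients
print(np.round(mnl_probabilities(attrs, beta), 3))
```

    Random taste variation (a mixed logit) would be obtained by averaging these probabilities over simulated draws of beta rather than using a single fixed vector.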

  14. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  15. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
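
    A minimal sketch of the FFT-based rigid-body scoring step at the core of such docking tools: the correlation of a receptor grid with every translation of a ligand grid is obtained with three FFTs instead of an explicit search over all translations. The grids here are toy 3-D arrays, not MEGADOCK's scoring function:

```python
import numpy as np

def fft_docking_scores(receptor, ligand):
    """Correlation score for every relative translation of the ligand.

    receptor, ligand: equally shaped 3-D grids encoding shape
    complementarity (e.g. positive surface, negative core)."""
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand)
    return np.real(np.fft.ifftn(R * np.conj(L)))   # all shifts at once

rng = np.random.default_rng(0)
receptor = rng.normal(size=(32, 32, 32))
ligand = np.zeros((32, 32, 32))
ligand[:8, :8, :8] = receptor[4:12, 4:12, 4:12]    # plant a known match
scores = fft_docking_scores(receptor, ligand)
print("best translation:", np.unravel_index(scores.argmax(), scores.shape))
```

    A full docking run repeats this scoring over many ligand rotations, which is the part that parallelizes across nodes and GPUs.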

  16. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a multigroup cross section constants library of the Bondarenko type with 190 groups, 132 groups for the fast and epithermal regions and 58 groups for the thermal region. Evaluation work showed that this code reproduces experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
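
    A minimal sketch of the multi-particle (event-based) tracking idea that makes Monte Carlo vectorizable: instead of following one particle at a time, whole arrays of particles advance through each event together. The physics below is a toy one-group slab problem with illustrative constants, not the code's 190-group treatment:

```python
import numpy as np

rng = np.random.default_rng(1)

SIGMA_T = 1.0      # toy total macroscopic cross section [1/cm]
P_ABSORB = 0.4     # toy absorption probability per collision
NU = 2.5           # toy neutrons emitted per fission
SLAB = 10.0        # toy slab thickness [cm]

def k_generation(n):
    """Track n neutrons of one generation simultaneously."""
    x = rng.uniform(0.0, SLAB, n)              # positions of all neutrons
    direction = rng.choice([-1.0, 1.0], n)
    alive = np.ones(n, dtype=bool)
    fission_sites = 0
    while alive.any():
        flight = rng.exponential(1.0 / SIGMA_T, n)   # all flights at once
        x = np.where(alive, x + direction * flight, x)
        alive &= ~((x < 0.0) | (x > SLAB))           # leakage
        collided = alive & (rng.random(n) < P_ABSORB)
        fission_sites += collided.sum()              # tally absorptions
        alive &= ~collided                           # survivors scatter
        direction = np.where(alive, rng.choice([-1.0, 1.0], n), direction)
    return NU * 0.5 * fission_sites / n              # toy k estimate

print("toy k-effective:", round(k_generation(200_000), 3))
```

    Every step in the loop is an array operation over all live particles, which is exactly what a vector processor (or a modern SIMD/GPU pipeline) exploits.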

  17. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF)- R and D Division, 92 - Clamart (France)

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  18. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic-ray-induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic-ray-induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q, including a comparison with failure data from Q.
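
    A minimal sketch of the accelerated-testing arithmetic such studies rely on: upsets observed under the intense beam scale down to the field by the ratio of beam flux to natural neutron flux. All numbers below are hypothetical placeholders (the sea-level flux is an approximate commonly cited figure):

```python
# Hypothetical accelerated neutron-beam test of one board:
beam_flux = 1.0e6              # neutrons/cm^2/s at the test station
field_flux = 14.0 / 3600.0     # ~14 n/cm^2/h at sea level -> n/cm^2/s
upsets = 12                    # upsets observed during the beam exposure
beam_seconds = 3600.0          # exposure time

cross_section = upsets / (beam_flux * beam_seconds)  # cm^2 per board
field_rate = cross_section * field_flux              # upsets/s in the field
boards = 2048                                        # boards in the machine
per_year = field_rate * boards * 3600 * 24 * 365
print(f"estimated machine-level upsets/year: {per_year:.1f}")
```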

  19. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code-named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  20. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5%, and of Overflow to within 10%, accuracy.

  1. The US considers consumer choice

    International Nuclear Information System (INIS)

    McCaughey, John.

    1996-01-01

    About half the states in the USA are seriously considering giving domestic customers the right to choose their own gas supplier, as large consumers have been able to do for years. This is referred to as ''unbundling''. Of the 1400 or so natural gas local distribution companies (LDCs), about one third appear to support unbundling, another third are opposed and the remainder are uncertain; small and medium-sized LDCs are most likely to be opposed. A number of state regulators are also ambivalent or actively hostile to the idea. The LDCs supply consumers with gas at the price the LDC pays for it. Their profits are made from connections and from transporting as large a volume of gas as possible, for which the supplier pays pass-through charges. The complex arguments as to whether unbundling will prove favourable to the LDCs, and what benefits and disadvantages there may be for customers, are examined. (UK)

  2. Topical phenytoin for treating pressure ulcers.

    Science.gov (United States)

    Hao, Xiang Yong; Li, Hong Ling; Su, He; Cai, Hui; Guo, Tian Kang; Liu, Ruifeng; Jiang, Lei; Shen, Yan Fei

    2017-02-22

    reduced healing. We therefore considered it to be insufficient to determine the effect of topical phenytoin on ulcer healing. One study compared topical phenytoin with triple antibiotic ointment; however, none of the outcomes of interest to this review were reported. No adverse drug reactions or interactions were detected in any of the three RCTs. Minimal pain was reported in all groups in one trial that compared topical phenytoin with hydrocolloid dressings and triple antibiotic ointment. This review has considered the available evidence, and the result shows that it is uncertain whether topical phenytoin improves ulcer healing for patients with grade I and II pressure ulcers. No adverse events were reported from three small trials, and minimal pain was reported in one trial. Therefore, further rigorous, adequately powered RCTs examining the effects of topical phenytoin for treating pressure ulcers, and reporting on adverse events, quality of life and costs, are necessary.

  3. Selected topics in grand unification

    International Nuclear Information System (INIS)

    Seckel, D.

    1983-01-01

    This dissertation is a collection of four pieces of research dealing with grand unification. The topics are neutron oscillation, CP violation, magnetic monopole abundance and distribution in neutron stars, and a proposal for an inflationary cosmology driven by stress-energy in domain walls

  4. Resources for Topics in Nursing.

    Science.gov (United States)

    Riordan, Dale B.

    This guide is intended to help the user become familiar with a selected group of reference tools and resources which are useful in nursing education and practice. It is important for students to use the correct medical or scientific terminology, understand the scope of a topic, and then utilize the tools necessary to research subjects of interest.…

  5. Seven topics in perturbative QCD

    International Nuclear Information System (INIS)

    Buras, A.J.

    1980-09-01

    The following topics of perturbative QCD are discussed: (1) deep inelastic scattering; (2) higher order corrections to e + e - annihilation, to photon structure functions and to quarkonia decays; (3) higher order corrections to fragmentation functions and to various semi-inclusive processes; (4) higher twist contributions; (5) exclusive processes; (6) transverse momentum effects; (7) jet and photon physics

  6. Selected topics in nuclear structure

    International Nuclear Information System (INIS)

    Stachura, Z.

    1984-09-01

    The 19th winter school in Zakopane was devoted to selected topics in nuclear structure such as: production of spin resonances, heavy-ion reactions and their applications to the investigation of high-spin states, octupole deformations, excited states and the production of new elements, etc. The experimental data are often compared with theoretical predictions. The report contains 28 papers. (M.F.W.)

  7. Hot Topics in Science Teaching

    Science.gov (United States)

    Ediger, Marlow

    2018-01-01

    There are vital topics in science teaching and learning which are mentioned frequently in the literature. Specialists advocate their importance in the curriculum, and science teachers stress their salience. Inservice education might well assist new and veteran teachers in knowledge and skills. The very best science lessons and units of…

  8. Topic Map for Authentic Travel

    OpenAIRE

    Wandsvik, Atle; Zare, Mehdi

    2007-01-01

    E-business is a new trend in Internet use. Authentic travel is an approach to travel and travel business which helps the traveler experience what is authentic in the travel destination. But how can the traveler find those small authentic spots and organize them together to compose a vacation? E-business techniques, combined with Topic Maps, can help.

  9. Two topics in quantum chromodynamics

    International Nuclear Information System (INIS)

    Bjorken, J.D.

    1989-12-01

    The two topics are (1) estimates of perturbation theory coefficients for R(e⁺e⁻ → hadrons), and (2) the virtual-photon structure function, with emphasis on the analytic behavior in its squared mass. 20 refs., 4 figs., 2 tabs

  10. Topical Session on Materials Management

    International Nuclear Information System (INIS)

    2002-01-01

    At its second meeting, in Paris, 5-7 December 2001, the WPDD held two topical sessions on the D and D Safety Case and on the Management of Materials from D and D, respectively. This report documents the topical session on the management of materials. Presentations during the topical session covered key aspects of the management of materials and were meant to provide an exchange of information and experience, including: Experience and lessons learnt from VLLW and non-radioactive material management in Spain and Germany, with special attention to recycling (How did specific solutions come about? Are there 'generic' examples for wider adoption?); Risk assessment of recycling and non-recycling: a CPD study; Waste acceptance issues within different national contexts (What constraints are there on the waste receiving body and what flexibility can the latter have? What constraints does this impose on D and D implementers? What about wastes without a current solution? What needs to be done? What about large items and 'difficult' waste in general?); Radiological characterisation of materials during decommissioning, particularly in difficult situations - large volumes, large items, wastes, heterogeneous streams (What examples of established practice are there? What are the approaches or aspects that set the regulatory requirements? How can the flow rates be large but the answers acceptable? How much needs to be known for later action, e.g., disposal, release, protection of workers, etc.?); Radiological characterisation of buildings as they stand, in order to allow conventional demolition (What are strategies for optimisation of characterisation? How much needs to be known to take action later, e.g. for storage, disposal, release, cost estimation and ALARA? What needs to be done in advance and after decommissioning/dismantling?). At the end of each presentation time was allotted for discussion of the paper. Integral to the Topical Session was a facilitated plenary discussion on the topical

  11. LDRD final report : a lightweight operating system for multi-core capability class supercomputers.

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Hudson, Trammell B. (OS Research); Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.; Brightwell, Ronald Brian

    2010-09-01

    The two primary objectives of this LDRD project were to create a lightweight kernel (LWK) operating system (OS) designed to take maximum advantage of multi-core processors, and to leverage the virtualization capabilities in modern multi-core processors to create a more flexible and adaptable LWK environment. The most significant technical accomplishments of this project were the development of the Kitten lightweight kernel, the co-development of the SMARTMAP intra-node memory mapping technique, and the development and demonstration of a scalable virtualization environment for HPC. Each of these topics is presented in this report by the inclusion of a published or submitted research paper. The results of this project are being leveraged by several ongoing and new research projects.

  12. Topical cyclosporine for atopic keratoconjunctivitis.

    Science.gov (United States)

    González-López, Julio J; López-Alcalde, Jesús; Morcillo Laiz, Rafael; Fernández Buenaga, Roberto; Rebolleda Fernández, Gema

    2012-09-12

    Atopic keratoconjunctivitis (AKC) is a chronic ocular surface non-infectious inflammatory condition that atopic dermatitis patients may suffer at any time point in the course of their dermatologic disease and is independent of its degree of severity. AKC is usually not self-resolving and it poses a higher risk of corneal injuries and severe sequelae. Management of AKC should prevent or treat corneal damage. Although topical corticosteroids remain the standard treatment for patients with AKC, prolonged use may lead to complications. Topical cyclosporine A (CsA) may improve AKC signs and symptoms, and be used as a corticosteroid-sparing agent. To determine the efficacy and gather evidence on safety from randomised controlled trials (RCTs) of topical CsA in patients with AKC. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2012, Issue 6), MEDLINE (January 1946 to July 2012), EMBASE (January 1980 to July 2012), Latin American and Caribbean Literature on Health Sciences (LILACS) (January 1982 to July 2012), Cumulative Index to Nursing and Allied Health Literature (CINAHL) (January 1937 to July 2012), OpenGrey (System for Information on Grey Literature in Europe) (www.opengrey.eu/), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov), the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en), the IFPMA Clinical Trials Portal (http://clinicaltrials.ifpma.org/no_cache/en/myportal/index.htm) and Web of Science Conference Proceedings Citation Index-Science (CPCI-S). We did not use any date or language restrictions in the electronic searches for trials. The electronic databases were last searched on 9 July 2012. We also handsearched the following conference proceedings: American Academy of Ophthalmology, Association for Research in Vision and Ophthalmology, International Council of Ophthalmology and Societas

  13. Topical antifungal agents: an update.

    Science.gov (United States)

    Diehl, K B

    1996-10-01

    So many topical antifungal agents have been introduced that it has become very difficult to select the proper agent for a given infection. Nonspecific agents have been available for many years, and they are still effective in many situations. These agents include Whitfield's ointment, Castellani paint, gentian violet, potassium permanganate, undecylenic acid and selenium sulfide. Specific antifungal agents include, among others, the polyenes (nystatin, amphotericin B), the imidazoles (metronidazole, clotrimazole) and the allylamines (terbinafine, naftifine). Although the choice of an antifungal agent should be based on an accurate diagnosis, many clinicians believe that topical miconazole is a relatively effective agent for the treatment of most mycotic infections. Terbinafine and other newer drugs have primary fungicidal effects. Compared with older antifungal agents, these newer drugs can be used in lower concentrations and shorter therapeutic courses. Studies are needed to evaluate the clinical efficacies and cost advantages of both newer and traditional agents.

  14. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...

  15. Do scientists trace hot topics?

    Science.gov (United States)

    Wei, Tian; Li, Menghui; Wu, Chensheng; Yan, Xiao-Yong; Fan, Ying; Di, Zengru; Wu, Jinshan

    2013-01-01

    Do scientists follow hot topics in their scientific investigations? In this paper, by analyzing papers published in the American Physical Society (APS) Physical Review journals, it is found that papers are more likely to be attracted by hot fields, where the hotness of a field is measured by the number of papers belonging to it. This indicates that scientists generally do follow hot topics. However, there are qualitative differences among scientists from various countries, and among research works with different numbers of authors, affiliations and references. These observations could be valuable for policy makers when deciding research funding and also for individual researchers when searching for scientific projects.

  16. Recent topics in nonlinear PDE

    International Nuclear Information System (INIS)

    Mimura, Masayasu; Nishida, Takaaki

    1984-01-01

    The meeting on the subject of nonlinear partial differential equations was held at Hiroshima University in February, 1983. Leading and active mathematicians were invited to talk on their current research interests in nonlinear pdes occurring in the areas of fluid dynamics, free boundary problems, population dynamics and mathematical physics. This volume contains the theory of nonlinear pdes and related topics which have been recently developed in Japan. (Auth.)

  17. Probabilistic analysis and related topics

    CERN Document Server

    Bharucha-Reid, A T

    1983-01-01

    Probabilistic Analysis and Related Topics, Volume 3 focuses on the continuity, integrability, and differentiability of random functions, including operator theory, measure theory, and functional and numerical analysis. The selection first offers information on the qualitative theory of stochastic systems and Langevin equations with multiplicative noise. Discussions focus on phase-space evolution via direct integration, phase-space evolution, linear and nonlinear systems, linearization, and generalizations. The text then ponders on the stability theory of stochastic difference systems and Marko

  18. Topics in clinical oncology. 15

    International Nuclear Information System (INIS)

    Cepcek, P.

    1987-12-01

    The monograph, comprising primarily papers on topical subjects of oncology and cancer research, also contains a selection of papers presented at the 2nd Congress of the Czechoslovak Society of Nuclear Medicine and Radiation Hygiene. Seven papers were selected because their subjects related to clinical oncology. All of them were input into INIS; five of them deal with scintiscanning of the skeleton of cancer patients, one with radioimmunodetection of tumors, and one with radionuclide lymphography. (A.K.)

  19. Probabilistic analysis and related topics

    CERN Document Server

    Bharucha-Reid, A T

    1979-01-01

    Probabilistic Analysis and Related Topics, Volume 2 focuses on the integrability, continuity, and differentiability of random functions, as well as functional analysis, measure theory, operator theory, and numerical analysis.The selection first offers information on the optimal control of stochastic systems and Gleason measures. Discussions focus on convergence of Gleason measures, random Gleason measures, orthogonally scattered Gleason measures, existence of optimal controls without feedback, random necessary conditions, and Gleason measures in tensor products. The text then elaborates on an

  20. Stochastic Analysis and Related Topics

    CERN Document Server

    Ustunel, Ali

    1988-01-01

    The Silvri Workshop was divided into a short summer school and a working conference, producing lectures and research papers on recent developments in stochastic analysis on Wiener space. The topics treated in the lectures relate to the Malliavin calculus, the Skorohod integral and nonlinear functionals of white noise. Most of the research papers are applications of these subjects. This volume addresses researchers and graduate students in stochastic processes and theoretical physics.

  1. New guidelines for topical NSAIDs in the osteoarthritis treatment paradigm.

    Science.gov (United States)

    Altman, Roy D

    2010-12-01

    Osteoarthritis (OA), the most common form of arthritis, often affects hands, hips, and knees and involves an estimated 26.9 million US adults. Women have a higher prevalence of OA, and the risk of developing OA increases with age, obesity, and joint malalignment. OA typically presents with pain and reduced function. Therapeutic programs are often multimodal and must take into account pharmaceutical toxicities and patient comorbidities. For example, nonsteroidal anti-inflammatory drugs (NSAIDs) are associated with cardiovascular, gastrointestinal, and renal adverse events. Topical NSAIDs offer efficacy with reduced systemic drug exposure. This is a review of current guideline recommendations regarding the use of topical NSAIDs in OA of the hand and knee. Articles were identified by PubMed search (January 1, 2000 to May 21, 2010). Several current guidelines for management of OA recommend topical NSAIDs, indicating them as a safe and effective treatment. One guideline recommends that topical NSAIDs be considered as first-line pharmacologic therapy. A US guideline for knee OA recommends topical NSAIDs in older patients and in patients with increased gastrointestinal risk. The consensus across US and European OA guidelines is that topical NSAIDs are a safe and effective treatment for OA. Because the research base on topical NSAIDs for OA is small, guidelines will continue to evolve.

  2. Link-topic model for biomedical abbreviation disambiguation.

    Science.gov (United States)

    Kim, Seonho; Yoon, Juntae

    2015-02-01

    The ambiguity of biomedical abbreviations is one of the challenges in biomedical text mining systems. In particular, the handling of term variants and abbreviations without nearby definitions is a critical issue. In this study, we adopt the concepts of topic of document and word link to disambiguate biomedical abbreviations. We newly suggest the link topic model inspired by the latent Dirichlet allocation model, in which each document is perceived as a random mixture of topics, where each topic is characterized by a distribution over words. Thus, the most probable expansions with respect to abbreviations of a given abstract are determined by word-topic, document-topic, and word-link distributions estimated from a document collection through the link topic model. The model allows two distinct modes of word generation to incorporate semantic dependencies among words, particularly long form words of abbreviations and their sentential co-occurring words; a word can be generated either dependently on the long form of the abbreviation or independently. The semantic dependency between two words is defined as a link and a new random parameter for the link is assigned to each word as well as a topic parameter. Because the link status indicates whether the word constitutes a link with a given specific long form, it has the effect of determining whether a word forms a unigram or a skipping/consecutive bigram with respect to the long form. Furthermore, we place a constraint on the model so that a word has the same topic as a specific long form if it is generated in reference to the long form. Consequently, documents are generated from the two hidden parameters, i.e. topic and link, and the most probable expansion of a specific abbreviation is estimated from the parameters. Our model relaxes the bag-of-words assumption of the standard topic model in which the word order is neglected, and it captures a richer structure of text than does the standard topic model by considering
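
    Since the link topic model extends latent Dirichlet allocation, it helps to state the generative process of standard LDA explicitly (standard notation, given here for context; the link variable described above is layered on top of this):

        \theta_d \sim \mathrm{Dirichlet}(\alpha), \qquad \phi_k \sim \mathrm{Dirichlet}(\beta),
        z_{d,n} \sim \mathrm{Multinomial}(\theta_d), \qquad w_{d,n} \sim \mathrm{Multinomial}(\phi_{z_{d,n}}),

    where \theta_d is the topic distribution of document d, \phi_k the word distribution of topic k, and z_{d,n} the topic assignment of the n-th word. The model described above adds a per-word link variable deciding whether w_{d,n} is generated dependently on the abbreviation's long form (forming a unigram or a skipping/consecutive bigram with it) or independently.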

  3. Tactile friction of topical formulations.

    Science.gov (United States)

    Skedung, L; Buraczewska-Norin, I; Dawood, N; Rutland, M W; Ringstad, L

    2016-02-01

    The tactile perception is essential for all types of topical formulations (cosmetic, pharmaceutical, medical device) and the possibility to predict the sensorial response by using instrumental methods instead of sensory testing would save time and cost at an early stage of product development. Here, we report on an instrumental evaluation method using tactile friction measurements to estimate perceptual attributes of topical formulations. Friction was measured between an index finger and an artificial skin substrate after application of formulations, using a force sensor. Both model formulations of liquid crystalline phase structures with significantly different tactile properties, as well as commercial pharmaceutical moisturizing creams that are more tactile-similar, were investigated. Friction coefficients were calculated as the ratio of the friction force to the applied load. The structures of the model formulations and phase transitions as a result of water evaporation were identified using optical microscopy. The friction device could distinguish friction coefficients between the phase structures, as well as between the commercial creams after spreading and absorption into the substrate. In addition, phase transitions resulting in alterations in the feel of the formulations could be detected. A correlation was established between skin hydration and friction coefficient, where hydrated skin gave rise to higher friction. A link between skin smoothening and finger friction was also established for the commercial moisturizing creams, although further investigations are needed to analyse this and correlations with other sensorial attributes in more detail. The present investigation shows that tactile friction measurements have potential as an alternative or complement in the evaluation of perception of topical formulations. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
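
    The friction coefficient referred to in the abstract is the standard ratio

        \mu = \frac{F_{\mathrm{friction}}}{F_{\mathrm{load}}},

    computed here from the force-sensor readings recorded while the finger slides over the artificial skin substrate.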

  4. Topic extraction from adverbial clauses

    Directory of Open Access Journals (Sweden)

    Carlos Rubio Alcalá

    2016-06-01

    Full Text Available This paper offers new data to support findings about Topic extraction from adverbial clauses. Since such clauses are strong islands, they should not allow extraction of any kind, but we show here that if the appropriate conditions are met, Topics of the CLLD kind in Romance can move out of them. We propose that two conditions must be met for such movement to be possible: the first is that the adverbial clause must have undergone topicalisation in the first place; the second is that the adverbial clause is inherently topical from a semantic viewpoint. Contrast with other language families (Germanic, Quechua and Japanese) is provided and the semantic implications of the proposal are briefly discussed. Keywords: topicalisation; Clitic Left Dislocation; syntactic islands; adverbial clauses

  5. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than

  6. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
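
    For readers unfamiliar with the staggered-grid velocity-pressure formulation mentioned in both records above, the scheme reduces in one dimension to a pair of interleaved leapfrog updates. The sketch below is a minimal, self-contained illustration only (homogeneous medium, rigid boundaries, and assumed material constants and source wavelet); the production simulation is 3D, with more than 5 billion cells, and ran on Shaheen II.

        /* 1D staggered-grid acoustic solver (velocity-pressure formulation).
           Pressure lives at cell centers, velocity at cell faces; the two
           fields are updated alternately (leapfrog in time). Sketch only. */
        #include <math.h>
        #include <stdio.h>

        #define NX 1000                       /* number of grid cells */
        #define NT 2000                       /* number of time steps */

        int main(void) {
            static double p[NX], v[NX + 1];   /* zero-initialized fields */
            const double dx  = 10.0;          /* 10 m grid step, as in the abstract */
            const double c   = 2000.0;        /* assumed P-wave speed, m/s */
            const double rho = 2200.0;        /* assumed density, kg/m^3 */
            const double K   = rho * c * c;   /* bulk modulus */
            const double dt  = 0.4 * dx / c;  /* CFL-stable time step */
            const double f0  = 25.0;          /* assumed source peak frequency, Hz */

            for (int n = 0; n < NT; n++) {
                /* Inject a Ricker wavelet source at the center cell */
                double t = n * dt - 1.2 / f0;
                double a = M_PI * f0 * t;
                p[NX / 2] += (1.0 - 2.0 * a * a) * exp(-a * a);

                /* Update face velocities from the pressure gradient */
                for (int i = 1; i < NX; i++)
                    v[i] -= dt / (rho * dx) * (p[i] - p[i - 1]);

                /* Update cell pressures from the velocity divergence */
                for (int i = 0; i < NX; i++)
                    p[i] -= dt * K / dx * (v[i + 1] - v[i]);
            }
            printf("pressure at receiver cell: %g\n", p[3 * NX / 4]);
            return 0;
        }

    Extending this pattern to three dimensions and decomposing the grid across MPI ranks gives the general shape of solver the survey simulation used.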

  7. Modern topics in electron scattering

    CERN Document Server

    Frois, Bernard

    1991-01-01

    This book summarizes the considerable progress recently achieved in the understanding of nucleon and nuclear structure by using high energy electrons as a probe. A collection of papers discusses in detail the new frontiers of this field. Experimental and theoretical articles cover topics such as the structure of the nucleon, nucleon distributions, many-body correlations, non-nucleonic degrees of freedom and few-body systems. This book is an up-to-date introduction to the research planned with continuous beam electron accelerators.

  8. Topics in deep inelastic scattering

    International Nuclear Information System (INIS)

    Wandzura, S.M.

    1977-01-01

    Several topics in deep inelastic lepton-nucleon scattering are discussed, with emphasis on the structure functions appearing in polarized experiments. The major results are: an infinite set of new sum rules reducing the number of independent spin-dependent structure functions (for electroproduction) from two to one; the application of the techniques of Nachtmann to extract the coefficients appearing in the Wilson operator product expansion; and radiative corrections to the Wilson coefficients of free field theory. Also discussed is the use of dimensional regularization to simplify the calculation of these radiative corrections

  9. Topics in conformal field theory

    International Nuclear Information System (INIS)

    Kiritsis, E.B.

    1988-01-01

    In this work two major topics in Conformal Field Theory are discussed. First a detailed investigation of N = 2 Superconformal theories is presented. The structure of the representations of the N = 2 superconformal algebras is investigated and the character formulae are calculated. The general structure of N = 2 superconformal theories is elucidated and the operator algebra of the minimal models is derived. The first minimal system is discussed in more detail. Second, applications of the conformal techniques are studied in the Ashkin-Teller model. The c = 1 as well as the c = 1/2 critical lines are discussed in detail

  10. Topics in atomic collision theory

    CERN Document Server

    Geltman, Sydney; Brueckner, Keith A

    1969-01-01

    Topics in Atomic Collision Theory originated in a course of graduate lectures given at the University of Colorado and at University College in London. It is recommended for students in physics and related fields who are interested in the application of quantum scattering theory to low-energy atomic collision phenomena. No attention is given to the electromagnetic, nuclear, or elementary particle domains. The book is organized into three parts: static field scattering, electron-atom collisions, and atom-atom collisions. These are in the order of increasing physical complexity and hence necessar

  11. Topics on Electricity Transmission Pricing

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerndal, Mette

    2000-02-01

    Within the last decade we have experienced deregulation of several industries, such as airlines, telecommunications and the electric utility industry, the last-mentioned being the focus of this work. Both the telecommunications and the electricity sector depend on network facilities, some of which are still considered as natural monopolies. In these industries, open network access is regarded as crucial in order to achieve the gains from increased competition, and transmission tariffs are important in implementing this. Based on the Energy Act that was introduced in 1991, Norway was among the first countries to restructure its electricity sector. On the supply side there are a large number of competing firms, almost exclusively hydro plants, with a combined capacity of about 23000 MW, producing 105-125 TWh per year, depending on the availability of water. Hydro plants are characterized by low variable costs of operation, however since water may be stored in dams, water has an opportunity cost, generally known as the water value, which is the shadow price of water when solving the generator's inter temporal profit maximization problem. Water values are the main factor of the producers' short run marginal cost. Total consumption amounts to 112-117 TWh a year, and consumers, even households, may choose their electricity supplier independent of the local distributor to which the customer is connected. In fact, approximately 10% of the households have actually changed supplier. The web-site www.konkurransetilsynet.no indicates available contracts, and www.dinside.no provides an ''energy-calculator'' where one can check whether it is profitable to switch supplier. If a customer buys energy from a remote supplier, the local distributor only provides transportation facilities for the energy and is compensated accordingly. Transmission and distribution have remained monopolized and regulated by the Norwegian Water Resources and Energy

  12. Topics in Electricity Transmission Pricing

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerndal, Mette

    2000-02-01

    Within the last decade we have experienced deregulation of several industries, such as airlines, telecommunications and the electric utility industry, the last-mentioned being the focus of this work. Both the telecommunications and the electricity sector depend on network facilities, some of which are still considered as natural monopolies. In these industries, open network access is regarded as crucial in order to achieve the gains from increased competition, and transmission tariffs are important in implementing this. Based on the Energy Act that was introduced in 1991, Norway was among the first countries to restructure its electricity sector. On the supply side there are a large number of competing firms, almost exclusively hydro plants, with a combined capacity of about 23000 MW, producing 105-125 TWh per year, depending on the availability of water. Hydro plants are characterized by low variable costs of operation, however since water may be stored in dams, water has an opportunity cost, generally known as the water value, which is the shadow price of water when solving the generator's inter temporal profit maximization problem. Water values are the main factor of the producers' short run marginal cost. Total consumption amounts to 112-117 TWh a year, and consumers, even households, may choose their electricity supplier independent of the local distributor to which the customer is connected. In fact, approximately 10% of the households have actually changed supplier. The web-site www.konkurransetilsynet.no indicates available contracts, and www.dinside.no provides an ''energy-calculator'' where one can check whether it is profitable to switch supplier. If a customer buys energy from a remote supplier, the local distributor only provides transportation facilities for the energy and is compensated accordingly. Transmission and distribution have remained monopolized and regulated by the Norwegian Water Resources and Energy

  13. Topics on Electricity Transmission Pricing

    International Nuclear Information System (INIS)

    Bjoerndal, Mette

    2000-02-01

    Within the last decade we have experienced deregulation of several industries, such as airlines, telecommunications and the electric utility industry, the last-mentioned being the focus of this work. Both the telecommunications and the electricity sector depend on network facilities, some of which are still considered as natural monopolies. In these industries, open network access is regarded as crucial in order to achieve the gains from increased competition, and transmission tariffs are important in implementing this. Based on the Energy Act that was introduced in 1991, Norway was among the first countries to restructure its electricity sector. On the supply side there are a large number of competing firms, almost exclusively hydro plants, with a combined capacity of about 23000 MW, producing 105-125 TWh per year, depending on the availability of water. Hydro plants are characterized by low variable costs of operation, however since water may be stored in dams, water has an opportunity cost, generally known as the water value, which is the shadow price of water when solving the generator's inter temporal profit maximization problem. Water values are the main factor of the producers' short run marginal cost. Total consumption amounts to 112-117 TWh a year, and consumers, even households, may choose their electricity supplier independent of the local distributor to which the customer is connected. In fact, approximately 10% of the households have actually changed supplier. The web-site www.konkurransetilsynet.no indicates available contracts, and www.dinside.no provides an ''energy-calculator'' where one can check whether it is profitable to switch supplier. If a customer buys energy from a remote supplier, the local distributor only provides transportation facilities for the energy and is compensated accordingly. Transmission and distribution have remained monopolized and regulated by the Norwegian Water Resources and Energy Directorate (NVE). To prevent cross

  14. Topical tar: Back to the future

    Energy Technology Data Exchange (ETDEWEB)

    Paghdal, K.V.; Schwartz, R.A. [University of Medicine & Dentistry of New Jersey, Newark, NJ (United States)

    2009-08-15

    The use of medicinal tar for dermatologic disorders dates back to ancient times. Although coal tar is utilized more frequently in modern dermatology, wood tars have also been widely employed. Tar is used mainly in the treatment of chronic stable plaque psoriasis, scalp psoriasis, atopic dermatitis, and seborrheic dermatitis, either alone or in combination therapy with other medications, phototherapy, or both. Many modifications have been made to tar preparations to increase their acceptability, as some dislike its odor, messy application, and staining of clothing. One should consider tar a tried and true treatment that has led to clearing of lesions and prolonged remission times. Occupational studies have demonstrated the carcinogenicity of tar; however, epidemiologic studies do not confirm similar outcomes when it is used topically. This article will review the pharmacology, formulations, efficacy, and adverse effects of crude coal tar and other tars in the treatment of selected dermatologic conditions.

  15. Theoretical topics in particle physics

    International Nuclear Information System (INIS)

    Roberts, L.A.

    1986-01-01

    This dissertation contains three parts, each with a distinct topic. The three topics are (1) Higgs-boson decays at the superconducting supercollider, (2) radiative corrections to the decay π⁰ → γe⁺e⁻ and (3) generalized random paths in three and four dimensions. In Part I, distributions in cos θ_lab, rapidity, energy, and p_T for the intermediate vector bosons resulting from p + p → (H⁰ → W⁺W⁻, Z⁰Z⁰) + X and p + p → (W⁺W⁻, W⁺Z⁰ + W⁻Z⁰, Z⁰Z⁰) + X at √s = 40 TeV are compared for Higgs-boson masses of 5m_W and 7m_W. The Higgs-boson-decay signal should be visible in the energy and p_T distributions of the vector bosons. In Part II, the radiative corrections to both the decay rate for π⁰ → γe⁺e⁻ and the differential spectrum in the invariant mass of the Dalitz pair for experiments with limited geometrical acceptance are calculated. In Part III, the author introduces a generalized model for random paths (in arbitrary dimension) which smoothly interpolates between the standard paths (fermionic or bosonic) and the self-avoiding paths. An efficient Monte Carlo algorithm to simulate the model is presented along with some preliminary results for the average length, intersection, overlap and mean square size of paths in three and four dimensions

  16. Satellite DNA: An Evolving Topic.

    Science.gov (United States)

    Garrido-Ramos, Manuel A

    2017-09-18

    Satellite DNA represents one of the most fascinating parts of the repetitive fraction of the eukaryotic genome. Since the discovery of highly repetitive tandem DNA in the 1960s, a lot of literature has extensively covered various topics related to the structure, organization, function, and evolution of such sequences. Today, with the advent of genomic tools, the study of satellite DNA has regained great interest. Thus, Next-Generation Sequencing (NGS), together with high-throughput in silico analysis of the information contained in NGS reads, has revolutionized the analysis of the repetitive fraction of eukaryotic genomes. The whole of the historical and current approaches to the topic gives us a broad view of the function and evolution of satellite DNA and its role in chromosomal evolution. Currently, we have extensive information on the molecular, chromosomal, biological, and population factors that affect the evolutionary fate of satellite DNA, knowledge that gives rise to a series of mutually compatible hypotheses about the origin, spreading, and evolution of satellite DNA. In this paper, I review these hypotheses from a methodological, conceptual, and historical perspective and frame them in the context of chromosomal organization and evolution.

  17. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available The data of modeling the technological process of mold filling on the supercomputer system SKIF by means of the computer system 'ProLIT-lc', as well as data of modeling the steel pouring process by means of 'ProNRS-lc', are presented. The influence of the number of processors of the multinuclear computer system SKIF on the acceleration and time of modeling of technological processes connected with the production of castings and slugs is shown.

  18. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with the fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used with the data sharing with the multicores that comprise a node and MPI can be used with the communication between nodes. In this paper, we use SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we can compare the performance of the hybrid SP and BT with the MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
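
    The hybrid pattern evaluated here (MPI between nodes, OpenMP among the cores of a node) reduces in its simplest form to the following sketch, a generic illustration in C rather than the NPB SP/BT source; the array size and the reduction are arbitrary choices:

        /* Minimal hybrid MPI/OpenMP pattern: typically one MPI rank per node
           (or per NUMA domain), with OpenMP threads across that rank's cores. */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        int main(int argc, char **argv) {
            int provided, rank, size;
            /* FUNNELED: only the main thread makes MPI calls */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            static double x[N];
            double local = 0.0, global = 0.0;

            /* On-node parallelism: OpenMP threads split this rank's loop */
            #pragma omp parallel for reduction(+:local)
            for (int i = 0; i < N; i++) {
                x[i] = (double)(rank + i) / N;
                local += x[i] * x[i];
            }

            /* Inter-node parallelism: MPI combines per-rank partial sums */
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0)
                printf("ranks=%d threads/rank=%d sum=%f\n",
                       size, omp_get_max_threads(), global);
            MPI_Finalize();
            return 0;
        }

    The trade-off the paper quantifies is how a fixed core budget is split between MPI ranks and OpenMP threads in code of this shape.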

  19. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
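
    The (3+1)-dimensional field evolution equation mentioned above is, in much of the filamentation literature, a nonlinear-envelope (NLSE-type) equation; one generic form (an assumption given here for orientation; the full model reviewed also couples dispersion and ionization) is

        \frac{\partial A}{\partial z} = \frac{i}{2 k_0} \nabla_{\perp}^{2} A
            - \frac{i k''}{2} \frac{\partial^{2} A}{\partial \tau^{2}}
            + i k_0 n_2 |A|^{2} A + (\text{ionization terms}),

    with A the field envelope, k'' the group-velocity dispersion and n_2 the nonlinear refractive index; the critical power of self-focusing referred to above scales as P_{\mathrm{cr}} \sim \lambda^{2} / (8 \pi n_0 n_2).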

  20. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with the fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used with the data sharing with the multicores that comprise a node and MPI can be used with the communication between nodes. In this paper, we use SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we can compare the performance of the hybrid SP and BT with the MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  1. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM² (Distance Computing and Communication) team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the rubric of the DOE's ASCI, the Accelerated Strategic Computing Initiative. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging Terabit Router products, demonstrated the latest technologies for delivering visualization data to the scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  2. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
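
    For context, "solving for the electronic structure starting from the Schrödinger equation" in DFT means iterating the Kohn-Sham equations to self-consistency (standard form, in atomic units):

        \Big[ -\tfrac{1}{2} \nabla^{2} + v_{\mathrm{ext}}(\mathbf{r})
            + \int \frac{n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \, d\mathbf{r}'
            + v_{\mathrm{xc}}[n](\mathbf{r}) \Big] \psi_i(\mathbf{r})
            = \varepsilon_i \psi_i(\mathbf{r}),
        \qquad n(\mathbf{r}) = \sum_{i \in \mathrm{occ}} |\psi_i(\mathbf{r})|^{2}.

    Each of the tens of thousands of Materials Project entries mentioned above is the product of calculations of this kind, which is why large supercomputing allocations are essential to the effort.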

  3. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL – to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; we measured results and documented our findings. We conclude that UPC is a challenge for porting code, Berkeley UPC is not very robust, and UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  4. Oral Versus Topical Diclofenac Sodium in the Treatment of Osteoarthritis.

    Science.gov (United States)

    Tieppo Francio, Vinicius; Davani, Saeid; Towery, Chris; Brown, Tony L

    2017-06-01

    Osteoarthritis (OA) is one of the most common causes of joint pain in the United States, and non-steroidal anti-inflammatory drugs (NSAIDs) such as diclofenac sodium, currently available in two main routes of administration, oral and topical, have been established as one of the standard treatments for OA. Generally, oral NSAIDs are well tolerated; however, our narrative review suggests that the topical solution has a better tolerability profile than oral diclofenac sodium, especially given the side effect of gastrointestinal bleeding associated with the oral format. In addition, the topical route may be considered a reasonable selection by clinicians for management of musculoskeletal pain in those patients with a history of potential risk and adverse side effects. Most studies reviewed comparing oral versus topical solutions of diclofenac sodium revealed comparable efficacy, with minimal side effects via the topical route. The key point of this narrative review is to help clinicians who currently must decide between very inexpensive oral diclofenac presentations and expensive topical presentations, especially in the elderly population, and to weigh the pros and cons of that decision-making process.

  5. Analyzing research trends on drug safety using topic modeling.

    Science.gov (United States)

    Zou, Chen

    2018-04-06

    Published drug safety data have evolved in the past decade due to scientific and technological advances in the relevant research fields. Considering that a vast amount of scientific literature has been published in this area, it is not easy to identify the key information. Topic modeling has emerged as a powerful tool to extract meaningful information from a large volume of unstructured texts. Areas covered: We analyzed the titles and abstracts of 4347 articles in four journals dedicated to drug safety from 2007 to 2016. We applied the Latent Dirichlet allocation (LDA) model to extract 50 main topics, and conducted trend analysis to explore the temporal popularity of these topics over the years. Expert Opinion/Commentary: We found that 'benefit-risk assessment and communication', 'diabetes' and 'biologic therapy for autoimmune diseases' are the top 3 most published topics. The topics relevant to the use of electronic health records/observational data for safety surveillance are becoming increasingly popular over time. Meanwhile, there is a slight decrease in research on signal detection based on spontaneous reporting, although spontaneous reporting still plays an important role in benefit-risk assessment. The topics related to medical conditions and treatment showed highly dynamic patterns over time.
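
    The "temporal popularity" of a topic in analyses of this kind is commonly measured as the mean document-topic weight among the articles published in a given year (a standard convention, assumed here; the paper's exact metric may differ):

        \mathrm{popularity}_k(y) = \frac{1}{|D_y|} \sum_{d \in D_y} \theta_{d,k},

    where D_y is the set of articles published in year y and \theta_{d,k} is the LDA weight of topic k in document d.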

  6. Topics in Banach space theory

    CERN Document Server

    Albiac, Fernando

    2016-01-01

    This text provides the reader with the necessary technical tools and background to reach the frontiers of research without the introduction of too many extraneous concepts. Detailed and accessible proofs are included, as are a variety of exercises and problems. The two new chapters in this second edition are devoted to two topics of much current interest amongst functional analysts: Greedy approximation with respect to bases in Banach spaces and nonlinear geometry of Banach spaces. This new material is intended to present these two directions of research for their intrinsic importance within Banach space theory, and to motivate graduate students interested in learning more about them. This textbook assumes only a basic knowledge of functional analysis, giving the reader a self-contained overview of the ideas and techniques in the development of modern Banach space theory. Special emphasis is placed on the study of the classical Lebesgue spaces Lp (and their sequence space analogues) and spaces of continuous f...

  7. Synergetics introduction and advanced topics

    CERN Document Server

    Haken, Hermann

    2004-01-01

    This book is an often-requested reprint of two classic texts by H. Haken: "Synergetics. An Introduction" and "Advanced Synergetics". Synergetics, an interdisciplinary research program initiated by H. Haken in 1969, deals with the systematic and methodological approach to the rapidly growing field of complexity. Going well beyond qualitative analogies between complex systems in fields as diverse as physics, chemistry, biology, sociology and economics, Synergetics uses tools from theoretical physics and mathematics to construct a unifying framework within which quantitative descriptions of complex, self-organizing systems can be made. This may well explain the timelessness of H. Haken's original texts on this topic, which are now recognized as landmarks in the field of complex systems. They provide both the beginning graduate student and the seasoned researcher with solid knowledge of the basic concepts and mathematical tools. Moreover, they admirably convey the spirit of the pioneering work by the founder of ...

  8. Topics in b-physics

    International Nuclear Information System (INIS)

    Bjorken, J.D.

    1988-09-01

    We discuss a few issues in the burgeoning field of physics of hadrons containing the b-quark. These include: a simple parameterization of the Kobayashi-Maskawa matrix featuring a triangle in the complex plane, a review of B_s and B_d mixing with special attention given to width-mixing and the CP-violating same-sign dilepton asymmetry, a discussion of the CP-violating decay B_d → ψπ⁺π⁻, and a discussion of CP-violating rate asymmetries in the two-body decays Λ_b → pπ⁻ and Λ_b → pK⁻. The concluding discussion concerns generalizations beyond these specific topics. 22 refs., 6 figs

  9. Conclusion from the fifth topic

    International Nuclear Information System (INIS)

    Gin, St.; Advocat, Th.

    1997-01-01

    The topic "mechanics and alteration kinetics of glasses" is a crucial point for the understanding of the long-term behaviour of nuclear glasses. Kinetic models used in simulation are based on the work of Grambow, who attributes the control of the alteration kinetics of borosilicate glasses to the desorption of the orthosilicic acid produced at the reactive interface. The ensuing kinetic law requires the existence of an equilibrium of silica at the glass/gel interface and of a linear concentration gradient of dissolved silica in the interstitial gel solution. The role of the gel needs further study to be well understood. The difficulty lies in the fact that the composition and the structure of the gel vary with time, space (anisotropy) and the conditions of alteration (temperature, pH, flow rate...). (A.C.)

  10. Antibiotic Resistance: MedlinePlus Health Topic

    Science.gov (United States)

  11. Topical Antibacterials and Global Challenges on Resistance ...

    African Journals Online (AJOL)

    skin infections can be easily treated with topical antibacterial medication that is available over the counter or by ... infection in minor cut or burn, eyes and ear infection [5]. .... Sensitive/dry skin ... includes both oral and topical antibiotics, but.

  12. Topic prominence in Chinese EFL learners’ interlanguage

    Directory of Open Access Journals (Sweden)

    Shaopeng Li

    2014-01-01

    Full Text Available The present study aims to investigate the general characteristics of the topic-prominent typological interlanguage development of Chinese learners of English in terms of acquiring subject-prominent English structures from a discourse perspective. Topic structures mainly appear in Chinese discourse in the form of topic chains (Wang, 2002; 2004). The research targets are the topic chain, which is the main topic-prominent structure in Chinese discourse, and zero anaphora, which is the most common topic anaphora in the topic chain. Two important findings emerged from the present study. First, the characteristics of Chinese topic chains are transferable to the interlanguage of Chinese EFL learners, thus resulting in overgeneralization of the zero anaphora. Second, the interlanguage discourse of Chinese EFL learners reflects a change of the second language acquisition process from topic-prominence to subject-prominence, thus lending support to the discourse transfer hypothesis.

  13. Eosinophilic Esophagitis: MedlinePlus Health Topic

    Science.gov (United States)

  14. Female Infertility: MedlinePlus Health Topic

    Science.gov (United States)

  15. Mobility Aids: MedlinePlus Health Topic

    Science.gov (United States)

  16. Genetic Testing: MedlinePlus Health Topic

    Science.gov (United States)

  17. Folic Acid: MedlinePlus Health Topic

    Science.gov (United States)

  18. Pneumococcal Infections: MedlinePlus Health Topic

    Science.gov (United States)

  19. Chiropractic: MedlinePlus Health Topic

    Science.gov (United States)

  20. Wilms' Tumor: MedlinePlus Health Topic

    Science.gov (United States)

  1. Child Safety: MedlinePlus Health Topic

    Science.gov (United States)

  2. Diets: MedlinePlus Health Topic

    Science.gov (United States)

  3. Colonoscopy: MedlinePlus Health Topic

    Science.gov (United States)

  4. Pneumocystis Infections: MedlinePlus Health Topic

    Science.gov (United States)

  5. Collapsed Lung: MedlinePlus Health Topic

    Science.gov (United States)

  6. Male Infertility: MedlinePlus Health Topic

    Science.gov (United States)

  7. Prediabetes: MedlinePlus Health Topic

    Science.gov (United States)

  8. Healthy Aging: MedlinePlus Health Topic

    Science.gov (United States)

  9. Psoriatic Arthritis: MedlinePlus Health Topic

    Science.gov (United States)

  10. Hip Replacement: MedlinePlus Health Topic

    Science.gov (United States)

  11. Diabetes: MedlinePlus Health Topic

    Science.gov (United States)

  12. Platelet Disorders: MedlinePlus Health Topic

    Science.gov (United States)

  13. Cardiac Rehabilitation: MedlinePlus Health Topic

    Science.gov (United States)

  14. Dialysis: MedlinePlus Health Topic

    Science.gov (United States)

  15. Eye Wear: MedlinePlus Health Topic

    Science.gov (United States)

  16. Cardiac Arrest: MedlinePlus Health Topic

    Science.gov (United States)

  17. Kawasaki Disease: MedlinePlus Health Topic

    Science.gov (United States)

  18. Diabetic Diet: MedlinePlus Health Topic

    Science.gov (United States)

  19. Infection Control: MedlinePlus Health Topic

    Science.gov (United States)

  20. Menopause: MedlinePlus Health Topic

    Science.gov (United States)

  1. Vaginitis: MedlinePlus Health Topic

    Science.gov (United States)

  2. Hearing Aids: MedlinePlus Health Topic

    Science.gov (United States)

  3. Kidney Tests: MedlinePlus Health Topic

    Science.gov (United States)

  4. Ischemic Stroke: MedlinePlus Health Topic

    Science.gov (United States)

  5. Pulmonary Rehabilitation: MedlinePlus Health Topic

    Science.gov (United States)

  6. Human-competitive automatic topic indexing

    CERN Document Server

    Medelyan, Olena

    2009-01-01

    Topic indexing is the task of identifying the main topics covered by a document. These are useful for many purposes: as subject headings in libraries, as keywords in academic publications and as tags on the web. Knowing a document’s topics helps people judge its relevance quickly. However, assigning topics manually is labor intensive. This thesis shows how to generate them automatically in a way that competes with human performance. Three kinds of indexing are investigated: term assignment, a task commonly performed by librarians, who select topics from a controlled vocabulary; tagging, a popular activity of web users, who choose topics freely; and a new method of keyphrase extraction, where topics are equated to Wikipedia article names. A general two-stage algorithm is introduced that first selects candidate topics and then ranks them by significance based on their properties. These properties draw on statistical, semantic, domain-specific and encyclopedic knowledge. They are combined using a machine learn...
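
    The two-stage idea is easy to prototype. The sketch below extracts candidate n-grams and ranks them with a transparent score combining frequency and first-occurrence position; the thesis combines much richer statistical, semantic, domain-specific and encyclopedic features with a learned model, so this scorer is only an illustrative stand-in.

        import re
        from collections import Counter

        def candidates(text, max_len=3):
            """Stage 1: candidate topics = word n-grams up to max_len words."""
            words = re.findall(r"[a-z]{4,}", text.lower())
            return [" ".join(words[i:i + n])
                    for n in range(1, max_len + 1)
                    for i in range(len(words) - n + 1)]

        def rank(text, top_k=5):
            """Stage 2: rank candidates by simple significance properties."""
            grams = candidates(text)
            freq = Counter(grams)
            first = {g: grams.index(g) / len(grams) for g in freq}  # earlier is better
            score = {g: freq[g] * (1.0 - first[g]) for g in freq}
            return sorted(score, key=score.get, reverse=True)[:top_k]

        print(rank("topic indexing identifies the main topics covered by a "
                   "document; automatic topic indexing competes with human indexers"))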

  7. Topical report review status: Volume 10

    International Nuclear Information System (INIS)

    1996-03-01

    This report provides industry with procedures for submitting topical reports, guidance on how the U.S. Nuclear Regulatory Commission (NRC) processes and responds to topical report submittals, and an accounting, with review schedules, of all topical reports currently accepted for review by the NRC. This report is published annually

  8. Topic modelling in the information warfare domain

    CSIR Research Space (South Africa)

    De Waal, A

    2013-11-01

    Full Text Available for interesting and relevant topics. The objectives of this paper are to describe topic modelling, put it in context as a useful information warfare (IW) technique, and illustrate its use with two examples. The authors discuss several applications of topic modelling in the safety and security...

  9. Optical systems for synchrotron radiation. Lecture 1. Introductory topics. Revision

    International Nuclear Information System (INIS)

    Howells, M.R.

    1986-02-01

    Various fundamental topics are considered which underlie the design and use of optical systems for synchrotron radiation. The point of view of linear system theory is chosen which acts as a unifying concept throughout the series. In this context the important optical quantities usually appear as either impulse response functions (Green's functions) or frequency transfer functions (Fourier Transforms of the Green's functions). Topics include the damped harmonic oscillator, free-space optical field propagation, optical properties of materials, dispersion, and the Kramers-Kronig relations
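
    As a worked example of the linear-system viewpoint described above, the damped harmonic oscillator x'' + 2*gamma*x' + w0^2*x = f(t) has the frequency transfer function H(w) = 1/(w0^2 - w^2 - 2i*gamma*w), the Fourier transform of its Green's function. The short sketch below evaluates |H(w)| numerically; the parameter values are arbitrary illustrations, not values from the lectures.

        import numpy as np

        w0, gamma = 1.0, 0.1                       # resonance frequency, damping rate
        w = np.linspace(0.0, 3.0, 601)             # drive frequencies
        H = 1.0 / (w0**2 - w**2 - 2j * gamma * w)  # frequency transfer function

        peak = w[np.argmax(np.abs(H))]
        print(f"response peaks near w = {peak:.3f} (resonance at w0 = {w0})")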

  10. Topics in particle physics phenomenology

    International Nuclear Information System (INIS)

    Pantaleone, J.T.

    1985-01-01

    This thesis consists of topics in field theory. In part A: (Chapter 1) A short review of heavy-quark physics, (Chapter 2) Spin-dependent forces in heavy-quark systems, (Chapter 3) Bound state effects in the Upsilon → γ + resonance; and in part B, The compatibility of free fractional charge and Dirac magnetic monopoles. In Chapter 2, using the results of the fourth-order quark-antiquark interactions in perturbative QCD, we show that the spin-dependent potentials in the formalism of Eichten and Feinberg and Gromes have to be generalized to include the quark mass dependence. The recently observed hyperfine and fine structure splittings in the J/ψ and Upsilon systems are found to agree with the purely perturbative QCD results for the scale parameter Λ_MS = 0.30 ± 0.06 GeV. With this value for Λ_MS we give some predictions on the Upsilon and toponium spectroscopies. In Chapter 3 we study the effect of b anti-b bound state dynamics on the reaction Upsilon → γ + resonance. We argue from the results that the recently discovered sigma(8320) must have a scalar, rather than a pseudoscalar, coupling to the b quark

  11. Decision Point 1 Topical Report

    Energy Technology Data Exchange (ETDEWEB)

    Yablonsky, Al; Barsoumian, Shant; Legere, David

    2013-05-01

    This Topical Report addresses accomplishments achieved during Phase 2a of the SkyMine® Carbon Mineralization Pilot Project. The primary objectives of this project are to design, construct, and operate a system to capture CO2 from a slipstream of flue gas from a commercial coal-fired cement kiln, convert that CO2 to products having commercial value (i.e., beneficial use), show the economic viability of the CO2 capture and conversion process, and thereby advance the technology to the point of readiness for commercial scale demonstration and proliferation. The overall process is carbon negative, resulting in mineralization of CO2 that would otherwise be released into the atmosphere. The project will also substantiate market opportunities for the technology by sales of chemicals into existing markets, and identify opportunities to improve technology performance and reduce costs at the commercial scale. The project is being conducted in two phases. The primary objectives of Phase 1 were to elaborate proven SkyMine® process chemistry to commercial pilot-scale operation and complete the preliminary design for the pilot plant to be built and operated in Phase 2, complete a NEPA evaluation, and develop a comprehensive carbon life cycle analysis. The objective of the current Phase (2a) is to complete the detailed design of the pilot plant to be built in Phase 2b.

  12. Main technical topics in 1999

    International Nuclear Information System (INIS)

    2000-01-01

    This annual report of the French Nuclear Safety Authority presents current organizational provisions and future trends in nuclear safety supervision in France and describes the most notable events of the past year. The first part presents nine documents on the main topics of 1999: the aging of nuclear installations, off-site emergency plans (PPI), the impact of nuclear activities on man and the environment, criticality hazards, EDF in 1999, the EPR project, Andra in 1999, transport incidents, and nuclear safety in eastern Europe. The second part presents the missions and actions of nuclear installation safety oversight in the areas of liability, the organization of nuclear safety control, the regulation of basic nuclear installations (INB), public information, international relations, crisis management, radioactive materials transportation, and radioactive wastes. Equipment, radiation protection, and the operation of pressurized water reactors are also treated, as are experimental reactors, fuel cycle installations, and the dismantling of nuclear installations. (A.L.B.)

  13. Topics in Number Theory Conference

    CERN Document Server

    Andrews, George; Ono, Ken

    1999-01-01

    From July 31 through August 3, 1997, the Pennsylvania State University hosted the Topics in Number Theory Conference. The conference was organized by Ken Ono and myself. By writing the preface, I am afforded the opportunity to express my gratitude to Ken for being the inspiring and driving force behind the whole conference. Without his energy, enthusiasm and skill the entire event would never have occurred. We are extremely grateful to the sponsors of the conference: The National Science Foundation, The Penn State Conference Center and the Penn State Department of Mathematics. The objective of this conference was to provide a variety of presentations giving a current picture of recent, significant work in number theory. There were eight plenary lectures: H. Darmon (McGill University), "Non-vanishing of L-functions and their derivatives modulo p." A. Granville (University of Georgia), "Mean values of multiplicative functions." C. Pomerance (University of Georgia), "Recent results in primality testing." C. ...

  14. The Virtual Robotics Laboratory; TOPICAL

    International Nuclear Information System (INIS)

    Kress, R.L.; Love, L.J.

    1999-01-01

    The growth of the Internet has provided a unique opportunity to expand research collaborations between industry, universities, and the national laboratories. The Virtual Robotics Laboratory (VRL) is an innovative program at Oak Ridge National Laboratory (ORNL) that is focusing on the issues related to collaborative research through controlled access to laboratory equipment using the World Wide Web. The VRL will provide different levels of access to selected ORNL laboratory equipment for secondary education programs. In the past, the ORNL Robotics and Process Systems Division has developed state-of-the-art robotic systems for the Army, NASA, the Department of Energy, the Department of Defense, and many other clients. After proof of concept, many of these systems sit dormant in the laboratories. This is not because all possible research topics have been exhausted, but because contracts have been completed and new programs generated. In the past, a number of visiting professors have used this equipment for their own research. However, this requires that the professor, and possibly his/her students, spend extended periods at the laboratory facility. In addition, only a very exclusive group of faculty can gain access to the laboratory and hardware. The VRL is a tool that enables extended collaborative efforts without regard to geographic limitations

  15. Topical subjects of nuclear energy

    International Nuclear Information System (INIS)

    1977-12-01

    The controversy regarding the introduction of nuclear energy into the energy supply of the Federal Republic of Germany has not yet subsided. However, the discussion has shifted from technical questions more to the field of political argumentation. In addition, questions concerning the back end of the fuel cycle have come to the fore. The report at hand deals with the topical subjects of fuel reprocessing, ultimate storage of radioactive wastes, the impact of power plants in general and nuclear power plants in particular on the climate, safety and safeguards questions concerning nuclear facilities and fissionable materials, and the properties and possibilities of plutonium. The authors have tried to present technical know-how in an easily comprehensible way. Literature references enable checking of the facts and provide the possibility to examine the subject matter in more detail. The seminar report is intended to give all those interested the opportunity to acquaint themselves with facts and know-how and to acquire knowledge on which to base a personal opinion. (orig.)

  16. The Physics of SERAPHIM; TOPICAL

    International Nuclear Information System (INIS)

    MARDER, BARRY M.

    2001-01-01

    The Segmented Rail Phased Induction Motor (SERAPHIM) has been proposed as a propulsion method for urban maglev transit, advanced monorail, and other forms of high speed ground transportation. In this report we describe the technology, consider different designs, and examine its strengths and weaknesses

  17. Topics in combinatorial pattern matching

    DEFF Research Database (Denmark)

    Vildhøj, Hjalte Wedel

    Problem. Given m documents of total length n, we consider the problem of finding a longest string common to at least d ≥ 2 of the documents. This problem is known as the longest common substring (LCS) problem and has a classic O(n) space and O(n) time solution (Weiner [FOCS’73], Hui [CPM’92]). However...

  18. Identifying Topics in Microblogs Using Wikipedia.

    Directory of Open Access Journals (Sweden)

    Ahmet Yıldırım

    Full Text Available Twitter is an extremely high volume platform for user generated contributions regarding any topic. The wealth of content created at real-time in massive quantities calls for automated approaches to identify the topics of the contributions. Such topics can be utilized in numerous ways, such as public opinion mining, marketing, entertainment, and disaster management. Towards this end, approaches to relate single or partial posts to knowledge base items have been proposed. However, in microblogging systems like Twitter, topics emerge from the culmination of a large number of contributions. Therefore, identifying topics based on collections of posts, where individual posts contribute to some aspect of the greater topic is necessary. Models, such as Latent Dirichlet Allocation (LDA), propose algorithms for relating collections of posts to sets of keywords that represent underlying topics. In these approaches, figuring out what the specific topic(s) the keyword sets represent remains as a separate task. Another issue in topic detection is the scope, which is often limited to specific domain, such as health. This work proposes an approach for identifying domain-independent specific topics related to sets of posts. In this approach, individual posts are processed and then aggregated to identify key tokens, which are then mapped to specific topics. Wikipedia article titles are selected to represent topics, since they are up to date, user-generated, sophisticated articles that span topics of human interest. This paper describes the proposed approach, a prototype implementation, and a case study based on data gathered during the heavily contributed periods corresponding to the four US election debates in 2012. The manually evaluated results (0.96 precision) and other observations from the study are discussed in detail.

  19. Identifying Topics in Microblogs Using Wikipedia.

    Science.gov (United States)

    Yıldırım, Ahmet; Üsküdarlı, Suzan; Özgür, Arzucan

    2016-01-01

    Twitter is an extremely high volume platform for user generated contributions regarding any topic. The wealth of content created at real-time in massive quantities calls for automated approaches to identify the topics of the contributions. Such topics can be utilized in numerous ways, such as public opinion mining, marketing, entertainment, and disaster management. Towards this end, approaches to relate single or partial posts to knowledge base items have been proposed. However, in microblogging systems like Twitter, topics emerge from the culmination of a large number of contributions. Therefore, identifying topics based on collections of posts, where individual posts contribute to some aspect of the greater topic is necessary. Models, such as Latent Dirichlet Allocation (LDA), propose algorithms for relating collections of posts to sets of keywords that represent underlying topics. In these approaches, figuring out what the specific topic(s) the keyword sets represent remains as a separate task. Another issue in topic detection is the scope, which is often limited to specific domain, such as health. This work proposes an approach for identifying domain-independent specific topics related to sets of posts. In this approach, individual posts are processed and then aggregated to identify key tokens, which are then mapped to specific topics. Wikipedia article titles are selected to represent topics, since they are up to date, user-generated, sophisticated articles that span topics of human interest. This paper describes the proposed approach, a prototype implementation, and a case study based on data gathered during the heavily contributed periods corresponding to the four US election debates in 2012. The manually evaluated results (0.96 precision) and other observations from the study are discussed in detail.
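
    A highly simplified sketch of the aggregation step follows: pool a collection of posts, extract key tokens, and map them onto Wikipedia article titles used as the topic inventory. The posts and the tiny title list are toy assumptions; the actual system draws on the full, up-to-date set of Wikipedia titles and a far more careful mapping.

        from collections import Counter

        posts = ["obama and romney debate the economy tonight",
                 "watching the presidential debate now",
                 "foreign policy dominates the final debate"]

        # Toy stand-in for the Wikipedia article titles used as topics.
        titles = ["United States presidential debates", "Economy", "Foreign policy"]

        tokens = Counter(t for p in posts for t in p.lower().split() if len(t) > 3)
        key_tokens = [t for t, _ in tokens.most_common(3)]

        # Naive containment mapping; the paper's pipeline is far more refined.
        topics = [title for title in titles
                  if any(tok in title.lower() for tok in key_tokens)]
        print(key_tokens, topics)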

  20. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein, and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of the inhibition of a target protein by a small molecule (ligand) is defined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to new inhibitor development, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important aspects limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
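
    The quasi-docking criterion lends itself to a compact check, sketched below under toy assumptions: for the same set of low-energy ligand poses (minima), each energy model is judged by how close its lowest-energy pose lies to the native pose. The RMSD and energy values are invented for illustration and are not data from the paper.

        import numpy as np

        rmsd = np.array([0.8, 1.5, 3.2, 5.1, 7.4])  # pose RMSD to native, in angstroms
        energies = {
            "MMFF94": np.array([-51.0, -53.5, -49.0, -50.2, -48.1]),
            "PM7":    np.array([-61.2, -58.9, -55.0, -54.3, -52.7]),
        }

        for method, e in energies.items():
            best = int(np.argmin(e))  # global minimum found by this energy model
            print(f"{method}: lowest-energy pose is {rmsd[best]:.1f} A from native")
        # Good positioning accuracy means the global minimum sits near the native pose.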

  1. 75 FR 26647 - Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution

    Science.gov (United States)

    2010-05-12

    .... FDA-2010-N-0002] Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution... are treated with a topical solution of ivermectin. DATES: This rule is effective May 12, 2010. FOR... ANADA 200-340 for PRIVERMECTIN (ivermectin), a topical solution used on cattle to control infestations...

  2. Topical tags vs non-topical tags : Towards a bipartite classification?

    NARCIS (Netherlands)

    Basile, Valerio; Peroni, Silvio; Tamburini, Fabio; Vitali, Fabio

    2015-01-01

    In this paper we investigate whether it is possible to create a computational approach that allows us to distinguish topical tags (i.e. talking about the topic of a resource) and non-topical tags (i.e. describing aspects of a resource that are not related to its topic) in folksonomies, in a way that

  3. Topical anti-infective sinonasal irrigations: update and literature review.

    Science.gov (United States)

    Lee, Jivianne T; Chiu, Alexander G

    2014-01-01

    Sinonasal anti-infective irrigations have emerged as a promising therapeutic modality in the comprehensive management of chronic rhinosinusitis (CRS), particularly in the context of recalcitrant disease. The purpose of this article was to delineate the current spectrum of topical anti-infective therapies available and evaluate their role in the treatment of CRS. A systematic literature review was performed on all studies investigating the use of topical antimicrobial solutions in the medical therapy of CRS. Anti-infective irrigations were stratified into topical antibacterial, antifungal, and additive preparations according to their composition and respective microbicidal properties. The use of topical antibiotic irrigations has been supported by low-level studies in the treatment of refractory CRS, with optimal results achieved in patients who have undergone prior functional endoscopic sinus surgery and received culture-directed therapy. Multiple evidence-based reviews have not established any clinical benefit with the administration of topical antifungals, and their use is not currently recommended in the management of routine CRS. Topical additives including surfactants may be beneficial as adjunctive treatment for recalcitrant CRS, but additional research is needed to investigate their efficacy in comparison with other agents and establish safety profiles. Topical anti-infective solutions are not recommended as first-line therapy for routine CRS but may be considered as a potential option for patients with refractory CRS who have failed traditional medical and surgical intervention. Additional research is necessary to determine which patient populations would derive the most benefit from each respective irrigation regimen and identify potential toxicities associated with prolonged use.

  4. Film forming systems for topical and transdermal drug delivery

    Directory of Open Access Journals (Sweden)

    Kashmira Kathe

    2017-11-01

    Full Text Available Skin is considered an important route of administration of drugs for both local and systemic effects. The effectiveness of topical therapy depends on the physicochemical properties of the drug, adherence of the patient to the treatment regimen, and the system's ability to adhere to the skin during therapy so as to promote drug penetration through the skin barrier. Conventional formulations for topical and dermatological administration of drugs have certain limitations such as poor adherence to skin, poor permeability and compromised patient compliance. For the treatment of diseases of body tissues and wounds, the drug has to be maintained at the site of treatment for an effective period of time. Topical film forming systems are such developing drug delivery systems, meant for topical application to the skin, which adhere to the body, forming a thin transparent film, and provide delivery of the active ingredients to the body tissue. These are intended for skin application as emollients or protectants and for local action or transdermal penetration of the medicament for systemic action. The transparency is an appreciable feature of this polymeric system which greatly influences patient acceptance. In the current discussion, film forming systems are described as a promising choice for topical and transdermal drug delivery. Further, the various types of film forming systems (sprays/solutions, gels and emulsions), along with their evaluation parameters, have also been reviewed.

  5. Recent Advances In Topical Therapy In Dermatology

    Directory of Open Access Journals (Sweden)

    Mohan Thappa Devinder

    2003-01-01

    Full Text Available With changing times, various newer topical agents have been introduced in the field of dermatology. Tacrolimus and pimecrolimus are immunosuppressants which are effective topically and have been tried in the management of atopic dermatitis as well as other disorders including allergic contact dermatitis, atrophic lichen planus and pyoderma gangrenosum. Imiquimod, an immune response modifier, is presently in use for genital warts but has potential as an anti-tumour agent and in various other dermatological conditions when used topically. Tazarotene is a newer addition to the list of topical retinoids; it is effective in psoriasis and has better effect in combination with calcipotriene, phototherapy and topical corticosteroids. Tazarotene and adapalene are also effective in inflammatory acne. Calcipotriol, a vitamin D analogue, has been introduced as a topical agent in the treatment of psoriasis. Steroid compounds are also being developed that are devoid of the usual side effects while retaining adequate anti-inflammatory effect. Topical photodynamic therapy also has a wide range of uses in dermatology. Newer topical agents including cidofovir, capsaicin, topical sensitizers and topical antifungal agents for onychomycosis are also of use in clinical practice. Other promising developments include skin substitutes and growth factors for wound care.

  6. Topics in Gravitation and Cosmology

    Science.gov (United States)

    Bahrami Taghanaki, Sina

    This thesis is focused on two topics in which relativistic gravitational fields play an important role, namely early Universe cosmology and black hole physics. The theory of cosmic inflation has emerged as the most successful theory of the very early Universe with concrete and verifiable predictions for the properties of anisotropies of the cosmic microwave background radiation and large scale structure. Coalescences of black hole binaries have recently been detected by the Laser Interferometer Gravitational Wave Observatory (LIGO), opening a new arena for observationally testing the dynamics of gravity. In part I of this thesis we explore some modifications to the standard theory of inflation. The main predictions of single field slow-roll inflation have been largely consistent with cosmological observations. However, there remain some aspects of the theory that are not presently well understood. Among these are the somewhat interrelated issues of the choice of initial state for perturbations and the potential imprints of pre-inflationary dynamics. It is well known that a key prediction of the standard theory of inflation, namely the Gaussianity of perturbations, is a consequence of choosing a natural vacuum initial state. In chapter 3, we study the generation and detectability of non-Gaussianities in inflationary scalar perturbations that originate from more general choices of initial state. After that, in chapter 4, we study a simple but predictive model of pre-inflationary dynamics in an attempt to test the robustness of inflationary predictions. We find that significant deviations from the standard predictions are unlikely to result from models in which the inflaton field decouples from the pre-inflationary degrees of freedom prior to freeze-out of the observable modes. In part II we turn to a study of an aspect of the thermodynamics of black holes, a subject which has led to important advances in our understanding of quantum gravity. For objects which

  7. Topics in elementary particle physics

    Science.gov (United States)

    Jin, Xiang

    The author of this thesis discusses two topics in elementary particle physics: n-ary algebras and their applications to M-theory (Part I), and functional evolution and Renormalization Group flows (Part II). In Part I, the Lie algebra is extended to four different n-ary algebraic structures: the generalized Lie algebra, the Filippov algebra, the Nambu algebra and the Nambu-Poisson tensor, though there are still many other n-ary algebras. A natural property of generalized Lie algebras, the Bremner identity, is studied and proved with a method entirely different from its original version. We extend the Bremner identity to n-bracket cases, where n is an arbitrary odd integer. Filippov algebras do not focus on associativity, and are defined by the Fundamental identity. We add associativity to Filippov algebras, and give examples of how to construct Filippov algebras from su(2), the bosonic oscillator and the Virasoro algebra. We try to include fermionic charges in the ternary Virasoro-Witt algebra, but the attempt fails because the fermionic charges keep generating new charges that prevent the algebra from closing. We also study the Bremner identity restriction on Nambu algebras and Nambu-Poisson tensors. So far, the only example of a 3-algebra used in physics is the BLG model with the 3-algebra A4, describing the interactions of two M2-branes. Its extension with the Nambu algebra, the BLG-NB model, is believed to describe infinite M2-brane condensation. There is also another proposal for M2-brane interactions, the ABJM model, which is constructed with an ordinary Lie algebra. We compare the symmetry properties of these models and discuss possible approaches to including all three in a grand unified theory. In Part II, we give an approximate solution of Schröder's functional equation, based on series and conjugation methods. We use the logistic map as an example, and demonstrate that this approximate solution converges to known analytical solutions around the fixed point, around which the approximate solution is constructed.

  8. Unsupervised topic discovery by anomaly detection

    OpenAIRE

    Cheng, Leon

    2013-01-01

    Approved for public release; distribution is unlimited. With the vast amount of information and public comment available online, it is of increasing interest to understand what is being said and what topics are trending online. Government agencies, for example, want to know what policies concern the public without having to look through thousands of comments manually. Topic detection provides automatic identification of topics in documents based on the information content and enhances many ...

  9. Tracking topic birth and death in LDA.

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Andrew T.; Robinson, David Gerald

    2011-09-01

    Most topic modeling algorithms that address the evolution of documents over time use the same number of topics at all times. This obscures the common occurrence in the data where new subjects arise and old ones diminish or disappear entirely. We propose an algorithm to model the birth and death of topics within an LDA-like framework. The user selects an initial number of topics, after which new topics are created and retired without further supervision. Our approach also accommodates many of the acceleration and parallelization schemes developed in recent years for standard LDA. In recent years, topic modeling algorithms such as latent semantic analysis (LSA)[17], latent Dirichlet allocation (LDA)[10] and their descendants have offered a powerful way to explore and interrogate corpora far too large for any human to grasp without assistance. Using such algorithms we are able to search for similar documents, model and track the volume of topics over time, search for correlated topics or model them with a hierarchy. Most of these algorithms are intended for use with static corpora where the number of documents and the size of the vocabulary are known in advance. Moreover, almost all current topic modeling algorithms fix the number of topics as one of the input parameters and keep it fixed across the entire corpus. While this is appropriate for static corpora, it becomes a serious handicap when analyzing time-varying data sets where topics come and go as a matter of course. This is doubly true for online algorithms that may not have the option of revising earlier results in light of new data. To be sure, these algorithms will account for changing data one way or another, but without the ability to adapt to structural changes such as entirely new topics they may do so in counterintuitive ways.
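
    One way to make the birth/death idea concrete is sketched below under toy assumptions: fit a separate LDA model per time slice over a shared vocabulary, link topics across consecutive slices by cosine similarity of their word distributions, and count unlinked new topics as births and unmatched old ones as deaths. The corpus, topic counts, and 0.7 threshold are illustrative; the report's algorithm manages creation and retirement within a single LDA-like framework rather than by refitting.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        slices = [["stock market prices fall", "market trading volume rises"],
                  ["flu vaccine trial results", "stock market rally continues"]]

        vec = CountVectorizer().fit([doc for s in slices for doc in s])  # shared vocabulary
        prev = None
        for t, docs in enumerate(slices):
            lda = LatentDirichletAllocation(n_components=2, random_state=0)
            lda.fit(vec.transform(docs))
            topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
            if prev is not None:
                sim = topics @ prev.T / np.outer(np.linalg.norm(topics, axis=1),
                                                 np.linalg.norm(prev, axis=1))
                print(f"slice {t}: births={(sim.max(axis=1) < 0.7).sum()}, "
                      f"deaths={(sim.max(axis=0) < 0.7).sum()}")
            prev = topics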

  10. Topic Time Series Analysis of Microblogs

    Science.gov (United States)

    2014-10-01

    may be distributed more globally. Tweets on a specific topic that cluster spatially, temporally or both might be of interest to analysts, marketers ... of $ and @, with the latter only in the case that it is the only character in the token (the @ symbol is significant in its usage by Instagram in ... is generated by Instagram. Topic 80, Distance: 143.2101. Top words: 1. rawr, 2. ^0^, 3. kill, 4. jurassic, 5. dinosaur. Analysis: This topic is quite

  11. Contact dermatitis: some important topics.

    Science.gov (United States)

    Pigatto, P D

    2015-11-01

    Allergic contact dermatitis (ACD) is a type IV delayed hypersensitivity reaction. The gold standard for diagnosis is patch testing. The prevalence of positive patch tests in referred patients with suspected ACD ranges from 27 to 95.6%. The relationship between ACD and atopic dermatitis (AD) is complicated, with conflicting reports of prevalence in the literature; however, in a patient with dermatitis not responding to traditional therapies, or with new areas of involvement, ACD should be considered as part of the work-up.

  12. Updates of Topical and Local Anesthesia Agents.

    Science.gov (United States)

    Boyce, Ricardo A; Kirpalani, Tarun; Mohan, Naveen

    2016-04-01

    As described in this article, there are many advances in topical and local anesthesia. Topical and local anesthetics have played a great role in dentistry in alleviating the fears of patients, eliminating pain, and providing pain control. Many invasive procedures would not be performed without the use and advances of topical/local anesthetics. The modern-day dentist has the responsibility of knowing the variety of products on the market and should have at least references to access before, during, and after treatment. This practice ensures proper care with topical and local anesthetics for the masses of patients entering dental offices worldwide. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Steam injections wells: topics to consider in casing design of steam injection wells; Revestimento para pocos de vapor

    Energy Technology Data Exchange (ETDEWEB)

    Conceicao, Antonio Carlos Farias [PETROBRAS, Recife, PE (Brazil). Gerencia de Perfuracao do Nordeste. Div. de Operacoes

    1994-07-01

    Steam injection is one of the processes used to increase production from very viscous oil reservoirs. A well is completed at a temperature of about 110 deg F, and during steam injection the temperature rises to around 600 deg F. The critical conditions generated by this change of temperature can strain or break down the casing. The usual casing design methods do not take into account special environmental conditions such as those of steam injection. From the results of this study we conclude that grade K-55 casing, heavy weight with premium connections, without pre-stressing and adequately pre-heated, is the best option for steam injection well completion in most Brazilian fields. (author)

  14. Revisiting the 'Buy versus Build' decision for publicly owned utilities in California considering wind and geothermal resources; TOPICAL

    International Nuclear Information System (INIS)

    Bolinger, Mark; Wiser, Ryan; Golove, William

    2001-01-01

    The last two decades have seen a dramatic increase in the market share of independent, non-utility generators (NUGs) relative to traditional, utility-owned generation assets. Accordingly, the "buy versus build" decision facing utilities--i.e., whether a utility should sign a power purchase agreement (PPA) with a NUG, or develop and own the generation capacity itself--has gained prominence in the industry. Specific debates have revolved around the relative advantages of, the types of risk created by, and the regulatory incentives favoring each approach. Very little of this discussion has focused specifically on publicly owned electric utilities, however, perhaps due to the belief that public power's tax-free financing status leaves little space in which NUGs can compete. With few exceptions (Wiser and Kahn 1996), renewable sources of supply have received similarly scant attention in the buy versus build debate. In this report, we revive the "buy versus build" debate and apply it to the two sectors of the industry traditionally underrepresented in the discussion: publicly owned utilities and renewable energy. Contrary to historical treatment, this debate is quite relevant to public utilities and renewables because publicly owned utilities are able to take advantage of some renewable energy incentives only in a "buy" situation, while others accrue only in a "build" situation. In particular, possible economic advantages of public utility ownership include: (1) the tax-free status of publicly owned utilities and the availability of low-cost debt, and (2) the renewable energy production incentive (REPI) available only to publicly owned utilities. Possible economic advantages to entering into a PPA with a NUG include: (1) the availability of federal tax credits and accelerated depreciation schedules for certain forms of NUG-owned renewable energy, and (2) the California state production incentives available to NUGs but not utilities. This report looks at a publicly owned utility's decision to buy or build new renewable energy capacity--specifically wind or geothermal power--in California. To examine the economic aspects of this decision, we modified and updated a 20-year financial cash-flow model to assess the levelized cost of electricity under four supply options: (1) public utility ownership of new geothermal capacity, (2) public utility ownership of new wind capacity, (3) a PPA for new geothermal capacity, and (4) a PPA for new wind capacity
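
    A bare-bones version of the cash-flow comparison can be sketched as follows: the levelized cost is the present value of costs divided by the present value of energy over the analysis period, and a public "build" benefits from a low tax-exempt cost of capital while a NUG PPA carries a higher discount rate. All numbers below are invented placeholders, not figures from the report.

        def lcoe(capital, annual_om, annual_mwh, rate, years=20):
            """Levelized cost of electricity: PV(costs) / PV(energy), in $/MWh."""
            pv_cost = capital + sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
            pv_mwh = sum(annual_mwh / (1 + rate) ** t for t in range(1, years + 1))
            return pv_cost / pv_mwh

        # Hypothetical 50 MW wind project: same plant, different financing costs.
        print("public build:", round(lcoe(60e6, 1.5e6, 150_000, 0.055), 2), "$/MWh")
        print("NUG PPA     :", round(lcoe(60e6, 1.5e6, 150_000, 0.085), 2), "$/MWh")
        # Incentives (REPI for the public option, federal tax credits for the NUG)
        # would then be netted against these figures before comparing the options.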

  15. Topics in the structure of hadronic systems

    International Nuclear Information System (INIS)

    Lebed, R.F.; Lawrence Berkeley Lab., CA

    1994-04-01

    In this dissertation the author examines a variety of different problems in the physics of strongly-bound systems. Each is elucidated by a different standard method of analysis developed to probe the properties of such systems. He begins with an examination of the properties and consequences of the current algebra of weak currents in the limit of heavy quark spin-flavor symmetry. In particular, he examines the assumptions in the proof of the Ademollo-Gatto theorem in general and for spin-flavor symmetry, and exhibits the constraints imposed upon matrix elements by this theorem. He then utilizes the renormalization-group method to create composite fermions in a three-generation electroweak model. Such a model is found to reproduce the same low energy behavior as the top-condensate electroweak model, although in general it may have strong constraints upon its Higgs sector. Next he uncovers subtleties in the nonrelativistic quark model that drastically alter the picture of the physical origins of meson electromagnetic and hyperfine mass splittings; in particular, the explicit contributions due to (m_d - m_u) and electrostatic potentials may be overwhelmed by other effects. Such novel effects are used to explain the anomalous pattern of mass splittings recently measured in bottom mesons. Finally, he considers the topic of baryon masses in heavy fermion chiral perturbation theory, including both tree-level and loop effects

  16. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  17. Topical antifungals for seborrhoeic dermatitis

    Science.gov (United States)

    Okokon, Enembe O; Verbeek, Jos H; Ruotsalainen, Jani H; Ojo, Olumuyiwa A; Bakhoya, Victor Nyange

    2015-01-01

    Background Seborrhoeic dermatitis is a chronic inflammatory skin condition that is distributed worldwide. It commonly affects the scalp, face and flexures of the body. Treatment options include antifungal drugs, steroids, calcineurin inhibitors, keratolytic agents and phototherapy. Objectives To assess the effects of antifungal agents for seborrhoeic dermatitis of the face and scalp in adolescents and adults. A secondary objective is to assess whether the same interventions are effective in the management of seborrhoeic dermatitis in patients with HIV/AIDS. Search methods We searched the following databases up to December 2014: the Cochrane Skin Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL) (2014, Issue 11), MEDLINE (from 1946), EMBASE (from 1974) and Latin American Caribbean Health Sciences Literature (LILACS) (from 1982). We also searched trials registries and checked the bibliographies of published studies for further trials. Selection criteria Randomised controlled trials of topical antifungals used for treatment of seborrhoeic dermatitis in adolescents and adults, with primary outcome measures of complete clearance of symptoms and improved quality of life. Data collection and analysis Review author pairs independently assessed eligibility for inclusion, extracted study data and assessed risk of bias of included studies. We performed fixed-effect meta-analysis for studies with low statistical heterogeneity and used a random-effects model when heterogeneity was high. Main results We included 51 studies with 9052 participants. Of these, 45 trials assessed treatment outcomes at five weeks or less after commencement of treatment, and six trials assessed outcomes over a longer time frame. We believe that 24 trials had some form of conflict of interest, such as funding by pharmaceutical companies. Among the included studies were 12 ketoconazole trials (N = 3253), 11 ciclopirox trials (N = 3029), two lithium trials (N = 141
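
    The pooling machinery mentioned in the methods (fixed-effect meta-analysis, switching to a random-effects model under high heterogeneity) reduces to a few lines of inverse-variance arithmetic, sketched below with invented effect sizes rather than data from the review.

        import numpy as np

        effects = np.array([0.30, 0.45, 0.20, 0.60])  # per-trial effect estimates
        se = np.array([0.12, 0.15, 0.10, 0.20])       # their standard errors

        w = 1 / se**2                                  # fixed-effect weights
        fixed = np.sum(w * effects) / np.sum(w)

        q = np.sum(w * (effects - fixed) ** 2)         # Cochran's Q (heterogeneity)
        tau2 = max(0.0, (q - (len(effects) - 1)) /
                   (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DerSimonian-Laird tau^2
        w_re = 1 / (se**2 + tau2)                      # random-effects weights
        random_effects = np.sum(w_re * effects) / np.sum(w_re)

        print(f"fixed: {fixed:.3f}, random: {random_effects:.3f}, tau^2: {tau2:.3f}")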

  18. Topics in elementary scattering theory

    International Nuclear Information System (INIS)

    Imrie, D.C.

    1980-01-01

    In these lectures a summary is given of some of the fundamental ideas and formalism used to describe and understand the interactions of elementary particles. A brief review of relativistic kinematics is followed by a discussion of Lorentz-invariant variables for describing two-body processes, phase space, and plots such as the Dalitz plot that can be used to study some aspects of the dynamics of an interaction relatively free from kinematic complications. A general description of scattering and decay is given and then, more specifically, some aspects of two-body interactions in the absence of spin are discussed. Finally, complications that arise when particle spin has to be taken into account are considered. (U.K.)
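
    As a concrete illustration of the Lorentz-invariant variables mentioned above, the following sketch computes the Mandelstam invariants s, t and u for a two-body process; the kinematics and numerical values are illustrative, not taken from the lectures.

```python
# Sketch (not from the lectures): Mandelstam variables for a two-body
# process a + b -> c + d, using the metric signature (+,-,-,-).
import numpy as np

def minkowski_dot(p, q):
    """Inner product of four-vectors p = (E, px, py, pz)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

def mandelstam(pa, pb, pc, pd):
    """Return the invariants s, t, u for a + b -> c + d."""
    s = minkowski_dot(pa + pb, pa + pb)   # squared total CM energy
    t = minkowski_dot(pa - pc, pa - pc)   # squared momentum transfer
    u = minkowski_dot(pa - pd, pa - pd)
    return s, t, u

# Example: elastic scattering of two unit-mass particles in the CM frame.
p = 2.0
E = np.hypot(1.0, p)                       # E = sqrt(m^2 + p^2), m = 1
pa = np.array([E, 0.0, 0.0,  p])
pb = np.array([E, 0.0, 0.0, -p])
theta = 0.3                                # CM scattering angle
pc = np.array([E, p * np.sin(theta), 0.0, p * np.cos(theta)])
pd = pa + pb - pc                          # four-momentum conservation
s, t, u = mandelstam(pa, pb, pc, pd)
print(s, t, u, s + t + u)                  # s + t + u = 4 m^2 = 4.0
```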

  19. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries, such as Los Alamos, CERN and the Rutherford laboratory, but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  20. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  1. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  2. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions

  3. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1986-01-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  4. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1985-12-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  5. Anesthesia: A Topic for Interdisciplinary Study.

    Science.gov (United States)

    Labianca, Dominick A.; Reeves, William J.

    1977-01-01

    Describes an interdisciplinary approach for teaching the topic of anesthesia as one aspect of a chemistry-oriented course for nonscience majors which focuses on timely topics such as the energy crisis and drugs. Historical treatment with the examination of literature is emphasized in teaching. (HM)

  6. Exploring Topic Structure: Coherence, Diversity and Relatedness

    NARCIS (Netherlands)

    J. He (Jiyin)

    2011-01-01

    The use of topical information has long been studied in the context of information retrieval. For example, grouping search results into topical categories enables more effective information presentation to users, while grouping documents in a collection can lead to efficient information

  7. Topical Treatment of Degenerative Knee Osteoarthritis.

    Science.gov (United States)

    Meng, Zengdong; Huang, Rongzhong

    2018-01-01

    This article reviews topical management strategies for degenerative osteoarthritis (OA) of the knee. A search of Pubmed, Embase and the Cochrane library using MeSH terms including "topical," "treatment," "knee" and "osteoarthritis" was carried out. Original research and review articles on the effectiveness and safety, recommendations from international published guidelines and acceptability studies of topical preparations were included. Current topical treatments for the management of knee OA include topical nonsteroidal anti-inflammatory drugs, capsaicin, salicylates and physical treatments such as hot or cold therapy. Current treatment guidelines recommend topical nonsteroidal anti-inflammatory drugs as an alternative and even first-line therapy for OA management, especially among elderly patients. Guidelines on other topical treatments vary, from recommendations against their use to support for their use as alternative or simultaneous therapy, especially for patients with contraindications to other analgesics. Although often well tolerated and preferred by many patients, clinical care still lags in the adoption of topical treatments. Aspects of efficacy, safety and patient quality-of-life data require further research. Copyright © 2018 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.

  8. Topical cholesterol in clofazimine induced ichthyosis

    Directory of Open Access Journals (Sweden)

    Pandey S

    1994-01-01

    Full Text Available Topical application of 10% cholesterol in petrolatum significantly (P < 0.05) controlled the development of ichthyosis in 62 patients taking 100 mg clofazimine daily for a period of 3 months. However, topical cholesterol application did not affect the lowering of serum cholesterol induced by oral clofazimine. The probable mechanism of action is discussed.

  9. Infantile generalized hypertrichosis caused by topical minoxidil*

    Science.gov (United States)

    Rampon, Greice; Henkin, Caroline; de Souza, Paulo Ricardo Martins; de Almeida Jr, Hiram Larangeira

    2016-01-01

    Rare cases of hypertrichosis have been associated with topically applied minoxidil. We present the first reported case in the Brazilian literature of generalized hypertrichosis affecting a 5-year-old child, following use of minoxidil 5%, 20 drops a day, for hair loss. The laboratory investigation excluded hyperandrogenism and thyroid dysfunction. Topical minoxidil should be used with caution in children. PMID:26982785

  10. Infantile generalized hypertrichosis caused by topical minoxidil.

    Science.gov (United States)

    Rampon, Greice; Henkin, Caroline; de Souza, Paulo Ricardo Martins; Almeida, Hiram Larangeira de

    2016-01-01

    Rare cases of hypertrichosis have been associated with topically applied minoxidil. We present the first reported case in the Brazilian literature of generalized hypertrichosis affecting a 5-year-old child, following use of minoxidil 5%, 20 drops a day, for hair loss. The laboratory investigation excluded hyperandrogenism and thyroid dysfunction. Topical minoxidil should be used with caution in children.

  11. Aerodynamics of wind turbines emerging topics

    CERN Document Server

    Amano, R S

    2014-01-01

    Focusing on the aerodynamics of wind turbines, with topics ranging from the fundamentals to applications of horizontal-axis wind turbines, this book presents advanced topics including basic theory for wind turbine blade aerodynamics, computational methods, and special structural reinforcement techniques for wind turbine blades.

  12. Topic Prominence in Chinese EFL Learners' Interlanguage

    Science.gov (United States)

    Li, Shaopeng; Yang, Lianrui

    2014-01-01

    The present study aims to investigate the general characteristics of the topic-prominent typological interlanguage development of Chinese learners of English in terms of acquiring subject-prominent English structures from a discourse perspective. Topic structures mainly appear in Chinese discourse in the form of topic chains (Wang, 2002; 2004). The…

  13. Fostering Topic Knowledge: Essential for Academic Writing

    Science.gov (United States)

    Proske, Antje; Kapp, Felix

    2013-01-01

    Several researchers emphasize the role of the writer's topic knowledge for writing. In academic writing, topic knowledge is often constructed by studying source texts. One possibility for supporting that essential phase of the writing process is to provide interactive learning questions which facilitate the construction of an adequate situation…

  14. Selected topics in neutrino physics

    International Nuclear Information System (INIS)

    Mann, A.K.

    1979-01-01

    Lectures on the contribution of neutrino physics to recent developments in particle physics are presented. In the introductory lecture, prospects for investigations in neutrino physics and its applications to astrophysics and cosmology are briefly given. Some problems concerning the semileptonic inclusive reactions ν_μ(ν̄_μ) + N → ν_μ(ν̄_μ) + X and the elastic semileptonic neutral-current processes ν_μ(ν̄_μ) + p → ν_μ(ν̄_μ) + p are discussed in the second lecture. Particular attention in the third lecture is paid to the ν_μ(ν̄_μ) + N → μ⁻(μ⁺) + X reactions studied by physicists from Harvard, Pennsylvania, Wisconsin and Fermilab. The discrepancy between experiments and theoretical predictions is believed to be connected with systematic errors in their experiments which they have failed to take into account. The last lecture is devoted to dimuon and trimuon production by neutrinos. Neutrino-induced multimuons are considered to be a probe of new-particle production and decay, with a relatively clean process picture and well-understood background

  15. 76 FR 81806 - Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution

    Science.gov (United States)

    2011-12-29

    .... FDA-2011-N-0003] Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution... solution of ivermectin. DATES: This rule is effective December 29, 2011. FOR FURTHER INFORMATION CONTACT... ANADA 200-318 for ...

  16. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with increasing numbers of OpenMP threads per node, and find that increasing the number of threads beyond a point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.
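
    A minimal sketch of the kind of threads-per-node sweep described above, not the paper's benchmark codes: each MPI rank times a fixed BLAS-backed kernel under different thread limits, and the slowest rank defines the reported time. It assumes mpi4py and threadpoolctl are installed; run it with, e.g., mpiexec -n 4 python sweep.py.

```python
# Illustrative threads-per-rank sweep; not the cited study's benchmarks.
import numpy as np
from mpi4py import MPI
from threadpoolctl import threadpool_limits

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
a = np.random.rand(2048, 2048)

for threads in (1, 2, 4, 8):
    comm.Barrier()                              # start all ranks together
    t0 = MPI.Wtime()
    with threadpool_limits(limits=threads):     # cap BLAS/OpenMP threads
        for _ in range(5):
            a @ a                               # threaded matrix multiply
    elapsed = MPI.Wtime() - t0
    slowest = comm.reduce(elapsed, op=MPI.MAX, root=0)
    if rank == 0:
        print(f"{threads:2d} threads/rank: {slowest:6.2f} s (slowest rank)")
```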

  17. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with increasing numbers of OpenMP threads per node, and find that increasing the number of threads beyond a point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  18. Recent advances and perspectives in topical oral anesthesia.

    Science.gov (United States)

    Franz-Montan, Michelle; Ribeiro, Lígia Nunes de Morais; Volpato, Maria Cristina; Cereda, Cintia Maria Saia; Groppo, Francisco Carlos; Tofoli, Giovana Randomille; de Araújo, Daniele Ribeiro; Santi, Patrizia; Padula, Cristina; de Paula, Eneida

    2017-05-01

    Topical anesthesia is widely used in dentistry to reduce pain caused by needle insertion and injection of the anesthetic. However, successful anesthesia is not always achieved using the formulations that are currently commercially available. As a result, local anesthesia is still one of the procedures that is most feared by dental patients. Drug delivery systems (DDSs) provide ways of improving the efficacy of topical agents. Areas covered: An overview of the structure and permeability of oral mucosa is given, followed by a review of DDSs designed for dental topical anesthesia and their related clinical trials. Chemical approaches to enhance permeation and anesthesia efficacy, or to promote superficial anesthesia, include nanostructured carriers (liposomes, cyclodextrins, polymeric nanoparticle systems, solid lipid nanoparticles, and nanostructured lipid carriers) and different pharmaceutical dosage forms (patches, bio- and mucoadhesive systems, and hydrogels). Physical methods include pre-cooling, vibration, iontophoresis, and microneedle arrays. Expert opinion: The combination of different chemical and physical methods is an attractive option for effective topical anesthesia in oral mucosa. This comprehensive review should provide the readers with the most relevant options currently available to assist pain-free dental anesthesia. The findings should be considered for future clinical trials.

  19. Topical Session on the Decommissioning and Dismantling Safety Case

    International Nuclear Information System (INIS)

    2002-01-01

    Set up by the Radioactive Waste Management Committee (RWMC), the WPDD brings together senior representatives of national organisations who have a broad overview of Decommissioning and Dismantling (D and D) issues through their work as regulators, implementers, R and D experts or policy makers. These include representatives from regulatory authorities, industrial decommissioners from the NEA Cooperative Programme on Exchange of Scientific and Technical Information on Nuclear Installation Decommissioning Projects (CPD), and cross-representation from the NEA Committee on Nuclear Regulatory Activities, the Committee on Radiation Protection and Public Health, and the RWMC. The EC is a member of the WPDD and the IAEA also participates. This ensures co-ordination amongst activities in these international programmes. Participation from civil society organisations is considered on a case-by-case basis, and has already taken place through the active involvement of the Group of Municipalities with Nuclear Installations at the first meeting of the WPDD. At its second meeting, in Paris, 5-7 December 2001, the WPDD held two topical sessions, on the D and D Safety Case and on the Management of Materials from D and D, respectively. This report documents the topical session on the safety case. The topical session was meant to provide an exchange of information and experience on the following issues: What topics should be included in a safety case? Of what should it consist? Is there sufficient and complete guidance nationally and internationally? How do practices differ internationally? The main boundary condition for this session was that it would deal with plants from which spent fuel has been removed. The topical session was also kept at a level that makes the most of the varied constituency of the WPDD. Namely, interface issues are important, and issue identification and discussion was the immediate goal. There was less interest in examining areas where variability amongst national

  20. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, and missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  1. Problematic topic transitions in dysarthric conversation.

    Science.gov (United States)

    Bloch, Steven; Saldert, Charlotta; Ferm, Ulrika

    2015-01-01

    This study examined the nature of topic transition problems associated with acquired progressive dysarthric speech in the everyday conversation of people with motor neurone disease. Using conversation analytic methods, a video collection of five naturally occurring problematic topic transitions was identified, transcribed and analysed. These were extracted from a main collection of over 200 other-initiated repair sequences and a sub-set of 15 problematic topic transition sequences. The sequences were analysed with reference to how the participants both identified and resolved the problems. Analysis revealed that topic transition by people with dysarthria can prove problematic. Conversation partners may find transitions problematic not only because of speech intelligibility but also because of a sequential disjuncture between the dysarthric speech turn and whatever topic has come prior. In addition the treatment of problematic topic transition as a complaint reveals the potential vulnerability of people with dysarthria to judgements of competence. These findings have implications for how dysarthria is conceptualized and how specific actions in conversation, such as topic transition, might be suitable targets for clinical intervention.

  2. Inertial confinement fusion and related topics

    International Nuclear Information System (INIS)

    Starodub, A. N.

    2007-01-01

    The current state of the different approaches (laser fusion, light and heavy ions, electron beams) to the realization of inertial confinement fusion is considered. A comparative analysis leads to the conclusion that, from the viewpoint of physics, technology, safety and economics, the most realistic route to future energy production is an electric power plant based on a hybrid fission-fusion reactor, consisting of an external neutron source (based on laser fusion) and a subcritical two-cascade nuclear blanket that yields energy under the action of 14 MeV neutrons. The main topics of inertial confinement fusion, such as the energy driver, the interaction between plasmas and the driver beam, and the target design, are discussed. A new concept for a laser driver for IFE, based on the generation and amplification of radiation with controllable coherence, is reported. The studies performed demonstrate that a laser based on generation and amplification of radiation with controllable coherence (CCR laser) has a number of advantages compared to conventional laser schemes. The experiments carried out have shown the possibility of suppressing small-scale self-focusing, forming laser radiation pulses with the required characteristics, simplifying the optical scheme of the laser, achieving good matching of the laser-target system, and attaining homogeneous irradiation and high output laser energy density without traditional correcting systems (phase plates, adaptive optics, spatial filters, etc.). The results of the latest experiments to reach the ultimate energy characteristics of the developed laser system are also reported. Recent results from experiments aimed at studying the physical processes in targets illuminated by the laser with controllable coherence are presented and discussed, especially such important laser-matter interaction phenomena as absorption and scattering of the laser radiation, laser radiation harmonic generation, X

  3. Topics in supergravity and string theory

    International Nuclear Information System (INIS)

    Eastaugh, A.G.

    1987-01-01

    The first topic covered in this dissertation concerns the harmonic expansion technique and its application to the dimensional compactification of higher dimensional supergravity. A simple example is given to explain the method and then the method is applied to the problem of obtaining the mass spectrum of the squashed seven-sphere compactification of eleven dimensional supergravity. The second topic concerns the application of Fujikawa's method of anomaly calculation to the calculation of the critical dimension of various string models. The third topic is a study and explicit calculation of the Fock space representation of the vertex in Witten's formulation of the interacting open bosonic string field theory

  4. Control of pain with topical plant medicines

    Directory of Open Access Journals (Sweden)

    James David Adams Jr.

    2015-04-01

    Full Text Available Pain is normally treated with oral nonsteroidal anti-inflammatory agents and opioids. These drugs are dangerous and are responsible for many hospitalizations and deaths. It is much safer to use topical preparations made from plants to treat pain, even severe pain. To relieve pain, topical preparations must contain compounds that penetrate the skin and inhibit pain receptors such as transient receptor potential cation channels and cyclooxygenase-2. Inhibition of pain in the skin disrupts the pain cycle and avoids exposure of internal organs to large amounts of toxic compounds. Use of topical pain relievers has the potential to save many lives, decrease medical costs and improve therapy.

  5. Digital Social Network Mining for Topic Discovery

    Science.gov (United States)

    Moradianzadeh, Pooya; Mohi, Maryam; Sadighi Moshkenani, Mohsen

    Networked computers are expanding more and more around the world, and digital social networks are becoming of great importance for many people's work and leisure. This paper focuses on discovering the topics of the information exchanged in a digital social network. In brief, our method uses a hierarchical dictionary of related topics and words that is mapped to a graph. Then, by comparing the keywords extracted from the social network context with the graph nodes, the probability of a relation between the context and the desired topics is computed. This model can be used in many applications such as advertising, viral marketing and high-risk group detection.
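
    A toy sketch of the general idea, with an invented flat topic dictionary and a simple overlap-based score standing in for the authors' hierarchical dictionary and graph model:

```python
# Hypothetical illustration: score how strongly a message's keywords relate
# to each topic in a topic -> words dictionary (flat here, for brevity).
topic_words = {
    "sports":  {"match", "team", "goal", "league"},
    "finance": {"stock", "market", "price", "trade"},
    "travel":  {"flight", "hotel", "visa", "tour"},
}

def topic_probabilities(keywords):
    """Relative relation scores: keyword overlap per topic, normalised."""
    overlap = {t: len(words & keywords) for t, words in topic_words.items()}
    total = sum(overlap.values())
    if total == 0:
        return {t: 0.0 for t in topic_words}
    return {t: n / total for t, n in overlap.items()}

msg_keywords = {"market", "price", "team"}
print(topic_probabilities(msg_keywords))
# {'sports': 0.33..., 'finance': 0.66..., 'travel': 0.0}
```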

  6. Stress shadows - a controversial topic

    Science.gov (United States)

    Lasocki, Stanislaw; Karakostas, Vassilis G.; Papadimitriou, Eletheria E.; Orlecka-Sikora, Beata

    2010-05-01

    the sign of the change, though distinctly more in areas of positive than of negative change. In the case of seismicity accompanying underground mining exploitation, the coseismic stress changes expressed in terms of the Coulomb failure function are at least an order of magnitude smaller than those for earthquakes. Furthermore, they are only a small component of the total stress field variations in the mining rockmass, which are mainly controlled by the mining process. Nevertheless, our studies of the induced seismicity in the Rudna mine in the Legnica-Głogow Copper District in Poland showed that the influence of the Coulomb stress changes on the locations of subsequent events was statistically significant. We analyzed series of seismic events, quantifying the triggering and inhibiting effects by the proportion of events in the series whose locations were consistent with the stress-increased and stress-decreased zones, respectively. It was found that more than 60 percent of the analyzed seismic events occurred in areas where stress was enhanced by the occurrence of previous events. The significance of this result was determined by comparing it with 2000 results of the same analysis carried out on random permutations of the original series of events. The test indicated that locations in areas of positive stress change were preferred to a statistically significant degree when the stress changes exceeded 0.05 bar. However, no statistically significant inhibiting effect of negative static stress changes, within the considered range of these changes, was ascertained. Here we present details of these two studies and discuss possible reasons behind the negative conclusions on the existence of stress shadows.

  7. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    OpenAIRE

    Sung-Chien Lin

    2014-01-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the database of the Web of Science as input to the topic modeling approach. The results ...
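
    A minimal sketch of such a pipeline, assuming a list of (year, abstract) pairs; it uses scikit-learn's LDA implementation and a made-up miniature corpus rather than the Web of Science data used in the study.

```python
# Illustrative topic-modeling pipeline: fit LDA to abstracts, then average
# per-document topic proportions by year. Corpus below is invented.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

papers = [
    (2007, "citation analysis of journal impact factors"),
    (2009, "h-index and author level citation metrics"),
    (2011, "topic models for mapping research fields"),
    (2013, "altmetrics and social media mentions of papers"),
]
years = np.array([y for y, _ in papers])
X = CountVectorizer().fit_transform([text for _, text in papers])

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)                  # per-document topic proportions

# Average topic share per year -> the distribution of topics over time.
for year in np.unique(years):
    print(year, np.round(theta[years == year].mean(axis=0), 2))
```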

  8. Corneal Neurotoxicity Due to Topical Benzalkonium Chloride

    OpenAIRE

    Sarkar, Joy; Chaudhary, Shweta; Namavari, Abed; Ozturk, Okan; Chang, Jin-Hong; Yco, Lisette; Sonawane, Snehal; Khanolkar, Vishakha; Hallak, Joelle; Jain, Sandeep

    2012-01-01

    Topical application of benzalkonium chloride (BAK) to the eye causes dose-related corneal neurotoxicity. Corneal inflammation and reduction in aqueous tear production accompany neurotoxicity. Cessation of BAK treatment leads to recovery of corneal nerve density.

  9. TOPICALIZATION AND PASSIVISATION IN THE ENGLISH ...

    African Journals Online (AJOL)

    It is believed that this study will further highlight some of the basic ... The objective of the study is to explore topicalization and passivisation in the English Language ..... Radford, R. Transformational Syntax A Student Guide to Chomsky's ...

  10. SETI: A good introductory physics topic

    Science.gov (United States)

    Hobson, Art

    1997-04-01

    If America is to achieve the science literacy that is essential to industrialized democracy, all students must study such topics as scientific methodology, pseudoscience, ozone depletion, and global warming. My large-enrollment liberal-arts physics course covers the great principles of physics along with several such philosophical and societal topics. It is easy to include the interdisciplinary context of physics in courses for non-scientists, because these courses are flexible, conceptual, and taught to students whose interests span a broad range. Students find these topics relevant and fascinating, leading to large enrollments by non-scientists even in courses labeled "physics." I will discuss my approach to teaching the search for extra-terrestrial intelligence (SETI), a topic with lots of good physics and with connections to scientific methodology and pseudoscience. A textbook for this kind of course has been published, Physics: Concepts and Connections (Prentice-Hall, 1995).

  11. Section 608 Technician Certification Test Topics

    Science.gov (United States)

    Identifies some of the topics covered on Section 608 Technician Certification tests, such as ozone depletion, the Clean Air Act and Montreal Protocol, substitute refrigerants and oils, and refrigeration and recovery techniques.

  12. Web Enabled DROLS Verity TopicSets

    National Research Council Canada - National Science Library

    Tong, Richard

    1999-01-01

    The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...

  13. Atopic dermatitis: tacrolimus vs. topical corticosteroid use

    African Journals Online (AJOL)

    Atopic dermatitis (AD) is an inflammatory skin disease that is characterised .... effective in the treatment of AD.5. Although ..... original steroid preparations,20 the cost-effectiveness of ... Topical corticosteroids [homepage on the Internet]. c2010.

  14. Gastrointestinal Bleeding: MedlinePlus Health Topic

    Science.gov (United States)

    ... are many possible causes of GI bleeding, including hemorrhoids, peptic ulcers, tears or inflammation in the esophagus, ... blood. Related Health Topics: Hemorrhoids, Peptic Ulcer. National Institutes of Health. The primary ...

  15. Birth Weight: MedlinePlus Health Topic

    Science.gov (United States)

    ... growth restriction; Large for gestational age (LGA); Neonatal weight gain and nutrition; Small for gestational age (SGA). Related Health Topics: Fetal Health and Development, Premature Babies, Uncommon Infant and Newborn Problems. National Institutes of Health. The primary NIH ...

  16. Data Mining Thesis Topics in Finland

    OpenAIRE

    Bajo Rouvinen, Ari

    2017-01-01

    The Theseus open repository contains metadata on more than 100,000 thesis publications from the different universities of applied sciences in Finland. Different data mining techniques were applied to the Theseus dataset to build a web application for exploring thesis topics and degree programmes, using different libraries in Python and JavaScript. Thesis topics were extracted from keywords manually annotated by the authors and subjects curated by the librarians. During the project, the quality...

  17. Topical reports on Louisiana salt domes

    International Nuclear Information System (INIS)

    1983-09-01

    The Institute for Environmental Studies at Louisiana State University conducted research into the potential use of Louisiana salt domes for disposal of nuclear waste material. Topical reports generated in 1981 and 1982 related to Vacherie and Rayburn's domes are compiled and presented; these address palynological studies, tiltmeter monitoring, precise releveling, saline springs, and surface hydrology. The latter two are basically compilations of references related to these topics. Individual reports are abstracted

  18. Bare quantifier fronting as contrastive topicalization

    Directory of Open Access Journals (Sweden)

    Ion Giurgea

    2015-11-01

    Full Text Available I argue that indefinites (in particular bare quantifiers such as ‘something’, ‘somebody’, etc.) which are neither existentially presupposed nor in the restriction of a quantifier over situations, can undergo topicalization in a number of Romance languages (Catalan, Italian, Romanian, Spanish), but only if the sentence contains “verum” focus, i.e. focus on a high degree of certainty of the sentence. I analyze these indefinites as contrastive topics, using Büring’s (1999) theory (where the term ‘S-topic’ is used for what I call ‘contrastive topic’). I propose that the topic is evaluated in relation to a scalar set of generalized quantifiers such as {λP.∃x P(x), λP.MANY x P(x), λP.MOST x P(x), λP.∀x P(x)} or {λP.∃x P(x), λP.P(a), λP.P(b), …}, and that the contrastive topic is the weakest generalized quantifier in this set. The verum focus, which is part of the “comment” that co-occurs with the “Topic”, introduces a set of alternatives including degrees of certainty of the assertion. The speaker asserts that his claim is certainly true or highly probable, contrasting it with stronger claims for which the degree of probability is unknown. This explains the observation that in downward-entailing contexts, the fronted quantified DPs are headed by ‘all’ or ‘many’, whereas ‘some’, small numbers or ‘at least n’ appear in upward-entailing contexts. Unlike other cases of non-specific topics, which are property topics, these are quantifier topics: the topic part is a generalized quantifier, the comment is a property of generalized quantifiers. This explains the narrow scope of the fronted quantified DP.

  19. Is topical haloperidol a useful glaucoma treatment?

    OpenAIRE

    Lavin, M. J.; Andrews, V.

    1986-01-01

    A randomised, double-blind, single-dose study of topical haloperidol, a dopamine receptor blocking drug, was performed on 20 healthy volunteers. After its administration a modest reduction in intraocular pressure was recorded over the six-hour study period, but the difference was not significant at the p < 0.05 level. Although dopamine blocking agents are effective in reducing intraocular pressure in experimental animals, topical haloperidol appears unlikely to be clinically useful in...

  20. Selected topics in e+e- physics

    International Nuclear Information System (INIS)

    Sau Lan Wu

    1981-01-01

    Selected topics of recent experimental results from the high-energy electron-positron storage rings are presented. The topics include some of the tau and charm physics from SPEAR, the upsilon physics from DORIS and CESR, and the γγ physics and quark and gluon physics from the PLUTO and TASSO Collaborations at PETRA. Related data from the JADE and MARK J Collaborations at PETRA are discussed in separated papers at this school. (orig.)

  1. Topical Melatonin for Treatment of Androgenetic Alopecia

    OpenAIRE

    Fischer, Tobias W; Trüeb, Ralph M; Hänggi, Gabriella; Innocenti, Marcello; Elsner, Peter

    2012-01-01

    Background: In the search for alternative agents to oral finasteride and topical minoxidil for the treatment of androgenetic alopecia (AGA), melatonin, a potent antioxidant and growth modulator, was identified as a promising candidate based on in vitro and in vivo studies. Materials and Methods: One pharmacodynamic study on topical application of melatonin and four clinical pre-post studies were performed in patients with androgenetic alopecia or general hair loss and evaluated by standardise...

  2. Epidemic spread in bipartite network by considering risk awareness

    Science.gov (United States)

    Han, She; Sun, Mei; Ampimah, Benjamin Chris; Han, Dun

    2018-02-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. Exploring the interplay between human awareness and epidemic spreading is a topic that has been receiving increasing attention. Considering the fact that some well-known diseases only spread between different species, we propose a theoretical analysis of Susceptible-Infected-Susceptible (SIS) epidemic spread from the perspective of a bipartite network and risk aversion. Using mean-field theory, the epidemic threshold is calculated theoretically. Simulation results are consistent with the proposed analytic model. The results show that the final infection density decreases linearly with the value of individuals' risk awareness. Therefore, epidemic spread can be effectively suppressed by improving individuals' risk awareness.
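
    A minimal mean-field sketch of this kind of model; the way awareness rescales the transmission rate and the parameter values are assumptions for illustration, not the authors' exact equations.

```python
# Discrete-time mean-field SIS on a bipartite network: infection only
# crosses between the two groups; awareness 'alpha' damps transmission.
# Functional form and parameters are illustrative assumptions.
def sis_bipartite(beta, mu, k_a, k_b, alpha, steps=5000):
    """Iterate mean-field infection densities; return stationary values."""
    beta_eff = beta * (1.0 - alpha)    # awareness reduces transmission
    rho_a, rho_b = 0.01, 0.01          # initial infection densities
    for _ in range(steps):
        rho_a += -mu * rho_a + beta_eff * k_a * rho_b * (1 - rho_a)
        rho_b += -mu * rho_b + beta_eff * k_b * rho_a * (1 - rho_b)
    return rho_a, rho_b

# Mean-field threshold: the infection persists when
# beta_eff * sqrt(k_a * k_b) > mu; raising awareness pushes it below.
for alpha in (0.0, 0.3, 0.6):
    print(alpha, sis_bipartite(beta=0.08, mu=0.2, k_a=4, k_b=4, alpha=alpha))
```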

  3. Topical steroid addiction in atopic dermatitis

    Directory of Open Access Journals (Sweden)

    Fukaya M

    2014-10-01

    Full Text Available Mototsugu Fukaya,1 Kenji Sato,2 Mitsuko Sato,3 Hajime Kimata,4 Shigeki Fujisawa,5 Haruhiko Dozono,6 Jun Yoshizawa,7 Satoko Minaguchi8 1Tsurumai Kouen Clinic, Nagoya, 2Department of Dermatology, Hannan Chuo Hospital, Osaka, 3Sato Pediatric Clinic, Osaka, 4Kimata Hajime Clinic, Osaka, 5Fujisawa Dermatology Clinic, Tokyo, 6Dozono Medical House, Kagoshima, 7Yoshizawa Dermatology Clinic, Yokohama, 8Department of Dermatology, Kounosu Kyousei Hospital, Saitama, Japan Abstract: The American Academy of Dermatology published a new guideline regarding topical therapy in atopic dermatitis in May 2014. Although topical steroid addiction or red burning skin syndrome had been mentioned as possible side effects of topical steroids in a 2006 review article in the Journal of the American Academy of Dermatology, no statement was made regarding this illness in the new guidelines. This suggests that there are still controversies regarding this illness. Here, we describe the clinical features of topical steroid addiction or red burning skin syndrome, based on the treatment of many cases of the illness. Because there have been few articles in the medical literature regarding this illness, the description in this article will be of some benefit to better understand the illness and to spur discussion regarding topical steroid addiction or red burning skin syndrome. Keywords: topical steroid addiction, atopic dermatitis, red burning skin syndrome, rebound, corticosteroid, eczema

  4. Alternative Cancer Treatments: 10 Options to Consider

    Science.gov (United States)

    Alternative cancer treatments: 10 options to consider Alternative cancer treatments can't cure your cancer, but they may provide some ... that may help them, including complementary and alternative cancer treatments. If cancer makes you feel as if you ...

  5. Dynamic cellular manufacturing system design considering ...

    Indian Academy of Sciences (India)

    Kamal Deep

    cellular manufacturing system in a company is division of ... designed to be assembled from a small number of stan- ..... contingency part process route in addition to the alternate .... istic industrial manufacturing vision considering multiple.

  6. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface, including groundwater dynamics, into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial in such an application for understanding the runtime behavior and identifying optimum model settings, and is an efficient way to detect potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9-petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience with the coupling, application tuning (e.g., a five-fold speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service, the Community Land Model (CLM) of NCAR, and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed
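
    For reference, the strong-scaling quantities examined in such analyses reduce to a short calculation; the node counts below match the ones mentioned above, but the wall-clock timings are invented for illustration.

```python
# Minimal strong-scaling helper: speedup and parallel efficiency relative
# to the smallest node count. Timings here are made up.
def strong_scaling(times):
    """times: {node_count: wall_time in seconds}."""
    base_n = min(times)
    base_t = times[base_n]
    for n in sorted(times):
        speedup = base_t / times[n]
        efficiency = speedup / (n / base_n)
        print(f"{n:5d} nodes: speedup {speedup:6.2f}, "
              f"efficiency {efficiency:5.1%}")

strong_scaling({32: 1000.0, 128: 270.0, 512: 80.0})
#    32 nodes: speedup   1.00, efficiency 100.0%
#   128 nodes: speedup   3.70, efficiency  92.6%
#   512 nodes: speedup  12.50, efficiency  78.1%
```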

  7. Incorporating Topic Assignment Constraint and Topic Correlation Limitation into Clinical Goal Discovering for Clinical Pathway Mining

    Directory of Open Access Journals (Sweden)

    Xiao Xu

    2017-01-01

    Full Text Available Clinical pathways are widely used around the world for providing quality medical treatment and controlling healthcare costs. However, expert-designed clinical pathways can hardly deal with the variances among hospitals and patients. This calls for a more dynamic and adaptive process, derived from actual clinical data. Topic-based clinical pathway mining is an effective approach for discovering a concise process model. In this approach, the latent topics found by latent Dirichlet allocation (LDA) represent the clinical goals, and process mining methods are used to extract the temporal relations between these topics. However, the topic quality is usually not desirable, due to the low performance of LDA on clinical data. In this paper, we incorporate a topic assignment constraint and a topic correlation limitation into LDA to enhance its ability to discover high-quality topics. Two real-world datasets are used to evaluate the proposed method. The results show that the topics discovered by our method have higher coherence, informativeness and coverage than those of the original LDA. These quality topics are suitable for representing the clinical goals. We also illustrate that our method is effective in generating a comprehensive topic-based clinical pathway model.
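
    Topic coherence, one of the evaluation criteria reported above, is commonly scored with the UMass measure; below is a sketch of that measure with an invented mini-corpus (the exact metric used in the paper is not specified here).

```python
# UMass topic coherence (Mimno et al. style) as a sketch of how "topic
# coherence" is typically measured; corpus and top words are invented.
import math
from itertools import combinations

def umass_coherence(top_words, docs):
    """Sum over ordered pairs (wi ranked before wj) of
    log[(D(wi, wj) + 1) / D(wi)], with D = document frequency."""
    doc_sets = [set(d.split()) for d in docs]
    def freq(*words):
        return sum(all(w in s for w in words) for s in doc_sets)
    score = 0.0
    for i, j in combinations(range(len(top_words)), 2):
        wi, wj = top_words[i], top_words[j]                # wi ranked higher
        score += math.log((freq(wi, wj) + 1) / freq(wi))   # wi must occur
    return score

# Invented clinical snippets; higher (less negative) scores are better.
docs = ["chest pain ecg troponin", "ecg troponin admission",
        "fracture cast xray", "xray chest pain"]
print(umass_coherence(["ecg", "troponin", "pain"], docs))  # ~0.405
```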

  8. Comparing Consider-Covariance Analysis with Sigma-Point Consider Filter and Linear-Theory Consider Filter Formulations

    Science.gov (United States)

    Lisano, Michael E.

    2007-01-01

    Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called "unscented") formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [cf. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (cf. [5]). While in [4] the SPCF and the linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to
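
    At the core of any sigma-point filter is the deterministic sigma-point generation step; the sketch below shows the standard unscented construction with conventional default parameters, not the consider-parameter extension of the cited work.

```python
# Standard unscented sigma-point generation (the building block of
# sigma-point filters such as the SPCF); parameters are common defaults.
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Return 2n+1 sigma points plus mean (wm) and covariance (wc) weights."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)   # matrix square root
    points = np.vstack([mean,
                        mean + sqrt_cov.T,           # rows are sqrt columns
                        mean - sqrt_cov.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return points, wm, wc

# The weighted points reproduce the input mean and covariance exactly.
mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, wm, wc = sigma_points(mean, cov)
print(np.allclose(wm @ pts, mean))                       # True
diff = pts - wm @ pts
print(np.allclose((wc[:, None] * diff).T @ diff, cov))   # True
```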

  9. Dressings and topical agents for preventing pressure ulcers.

    Science.gov (United States)

    Moore, Zena E H; Webster, Joan

    2013-08-18

    Pressure ulcers, which are localised injury to the skin, or underlying tissue or both, occur when people are unable to reposition themselves to relieve pressure on bony prominences. Pressure ulcers are often difficult to heal, painful and impact negatively on the individual's quality of life. The cost implications of pressure ulcer treatment are considerable, compounding the challenges in providing cost effective, efficient health services. Efforts to prevent the development of pressure ulcers have focused on nutritional support, pressure redistributing devices, turning regimes and the application of various topical agents and dressings designed to maintain healthy skin, relieve pressure and prevent shearing forces. Although products aimed at preventing pressure ulcers are widely used, it remains unclear which, if any, of these approaches are effective in preventing the development of pressure ulcers. To evaluate the effects of dressings and topical agents on the prevention of pressure ulcers, in people of any age without existing pressure ulcers, but considered to be at risk of developing a pressure ulcer, in any healthcare setting. In February 2013 we searched the following electronic databases to identify reports of relevant randomised clinical trials (RCTs): the Cochrane Wounds Group Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Database of Abstracts of Reviews of Effects (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE; and EBSCO CINAHL. We included RCTs evaluating the use of dressings, topical agents, or topical agents with dressings, compared with a different dressing, topical agent, or combined topical agent and dressing, or no intervention or standard care, with the aim of preventing the development of a pressure ulcer. We assessed trials for their appropriateness for inclusion and for their risk of bias. This was done by two review

  10. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  11. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  12. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of the papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  13. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  14. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  15. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  16. Topics in linear optical quantum computation

    Science.gov (United States)

    Glancy, Scott Charles

    This thesis covers several topics in optical quantum computation. A quantum computer is a computational device which is able to manipulate information by performing unitary operations on some physical system whose state can be described as a vector (or mixture of vectors) in a Hilbert space. The basic unit of information, called the qubit, is considered to be a system with two orthogonal states, which are assigned logical values of 0 and 1. Photons make excellent candidates to serve as qubits. They have little interaction with the environment. Many operations can be performed using very simple linear optical devices such as beam splitters and phase shifters. Photons can easily be processed through circuit-like networks. Operations can be performed in very short times. Photons are ideally suited for the long-distance communication of quantum information. The great difficulty in constructing an optical quantum computer is that photons naturally interact weakly with one another. This thesis first gives a brief review of two early approaches to optical quantum computation. It describes how any discrete unitary operation can be performed using a single photon and a network of beam splitters, and how the Kerr effect can be used to construct a two-photon logic gate. Second, this work provides a thorough introduction to the linear optical quantum computer developed by Knill, Laflamme, and Milburn. It then presents this author's results on the reliability of this scheme when implemented using imperfect photon detectors. This author finds that quantum computers of this sort cannot be built using current technology. Third, this dissertation describes a method for constructing a linear optical quantum computer using nearly orthogonal coherent states of light as the qubits. It shows how a universal set of logic operations can be performed, including calculations of the fidelity with which these operations may be accomplished. It discusses methods for reducing and

  17. Learning topic models by belief propagation.

    Science.gov (United States)

    Zeng, Jia; Cheung, William K; Liu, Jiming

    2013-05-01

    Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision and computational biology. This paper represents collapsed LDA as a factor graph, which enables the classic loopy belief propagation (BP) algorithm to be used for approximate inference and parameter estimation. Although the two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have gained great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document datasets. Furthermore, the BP algorithm has the potential to become a generic scheme for learning variants of LDA-based topic models in the collapsed space. To this end, we show how to learn two typical variants of LDA-based topic models, the author-topic model (ATM) and the relational topic model (RTM), using BP based on their factor graph representations.
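
    To make the message-passing idea concrete, here is a minimal synchronous sketch in the spirit of BP on the collapsed LDA factor graph, operating on a word-document count matrix. It omits the leave-one-out corrections of the exact collapsed updates (making it closer to a CVB0-style approximation), and all names and hyperparameter values are illustrative assumptions rather than the authors' code.

        import numpy as np

        def lda_bp(X, K, alpha=0.1, beta=0.01, iters=50, seed=0):
            """Simplified synchronous message passing for collapsed LDA.

            X : (W, D) word-document count matrix; K : number of topics.
            Returns per-(word, document) topic messages mu of shape (W, D, K).
            """
            rng = np.random.default_rng(seed)
            W, D = X.shape
            mu = rng.random((W, D, K))
            mu /= mu.sum(axis=2, keepdims=True)          # normalize over topics
            for _ in range(iters):
                # Document-topic weights, smoothed by alpha (denominator is
                # constant over k, so it cancels when mu is renormalized).
                theta = np.einsum('wd,wdk->dk', X, mu) + alpha
                # Word-topic weights, smoothed by beta and normalized over the
                # vocabulary (this denominator does depend on k).
                phi = np.einsum('wd,wdk->wk', X, mu) + beta
                phi /= phi.sum(axis=0, keepdims=True)
                # Message update: product of the two beliefs, renormalized.
                mu = phi[:, None, :] * theta[None, :, :]
                mu /= mu.sum(axis=2, keepdims=True)
            return mu

        # Tiny example: 5 vocabulary words, 3 documents, 2 topics.
        X = np.array([[2, 0, 1], [1, 0, 0], [0, 3, 0], [0, 1, 2], [1, 0, 2]])
        mu = lda_bp(X, K=2)
        print(mu.shape)  # (5, 3, 2): topic posterior per word-document pair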

  18. Antimicrobial topical agents used in the vagina.

    Science.gov (United States)

    Frey Tirri, Brigitte

    2011-01-01

    Vaginally applied antimicrobial agents are widely used in women with lower genital tract infections. 'Antimicrobial' is a general term that refers to a group of drugs that are effective against bacteria, fungi, viruses and protozoa. Topical treatments can be prescribed for a wide variety of vaginal infections. Many bacterial infections, such as bacterial vaginosis, desquamative inflammatory vaginitis or, as some European authors call it, aerobic vaginitis, as well as infection with Staphylococcus aureus or group A streptococci, may be treated in this way. Candida vulvovaginitis is a fungal infection that is very amenable to topical treatment. The most common viral infections which can be treated with topical medications are condylomata acuminata and herpes simplex. The most often encountered protozoal vaginitis, caused by Trichomonas vaginalis, may be susceptible to topical medications, although this infection is usually treated systemically. This chapter covers the wide variety of commonly used topical antimicrobial agents for these diseases and focuses on the individual therapeutic agents and their clinical efficacy. In addition, potential difficulties that can occur in practice, as well as the usage of these medications in the special setting of pregnancy, are described in this chapter. Copyright © 2011 S. Karger AG, Basel.

  19. Development and Evaluation of Topical Gabapentin Formulations

    Directory of Open Access Journals (Sweden)

    Christopher J. Martin

    2017-08-01

    Topical delivery of gabapentin is desirable to treat peripheral neuropathic pain conditions whilst avoiding systemic side effects. To date, reports of topical gabapentin delivery in vitro have been variable and dependent on the skin model employed, primarily involving rodent and porcine models. In this study a variety of topical gabapentin formulations were investigated, including Carbopol® hydrogels containing various permeation enhancers, and a range of proprietary bases including a compounded Lipoderm® formulation; furthermore microneedle facilitated delivery was used as a positive control. Critically, permeation of gabapentin across a human epidermal membrane in vitro was assessed using Franz-type diffusion cells. Subsequently this data was contextualised within the wider scope of the literature. Although reports of topical gabapentin delivery have been shown to vary, largely dependent upon the skin model used, this study demonstrated that 6% (w/w) gabapentin 0.75% (w/w) Carbopol® hydrogels containing 5% (w/w) DMSO or 70% (w/w) ethanol and a compounded 10% (w/w) gabapentin Lipoderm® formulation were able to facilitate permeation of the molecule across human skin. Further pre-clinical and clinical studies are required to investigate the topical delivery performance and pharmacodynamic actions of prospective formulations.

  20. Development and Evaluation of Topical Gabapentin Formulations

    Science.gov (United States)

    Alcock, Natalie; Hiom, Sarah; Birchall, James C.

    2017-01-01

    Topical delivery of gabapentin is desirable to treat peripheral neuropathic pain conditions whilst avoiding systemic side effects. To date, reports of topical gabapentin delivery in vitro have been variable and dependent on the skin model employed, primarily involving rodent and porcine models. In this study a variety of topical gabapentin formulations were investigated, including Carbopol® hydrogels containing various permeation enhancers, and a range of proprietary bases including a compounded Lipoderm® formulation; furthermore microneedle facilitated delivery was used as a positive control. Critically, permeation of gabapentin across a human epidermal membrane in vitro was assessed using Franz-type diffusion cells. Subsequently this data was contextualised within the wider scope of the literature. Although reports of topical gabapentin delivery have been shown to vary, largely dependent upon the skin model used, this study demonstrated that 6% (w/w) gabapentin 0.75% (w/w) Carbopol® hydrogels containing 5% (w/w) DMSO or 70% (w/w) ethanol and a compounded 10% (w/w) gabapentin Lipoderm® formulation were able to facilitate permeation of the molecule across human skin. Further pre-clinical and clinical studies are required to investigate the topical delivery performance and pharmacodynamic actions of prospective formulations. PMID:28867811

  1. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the database of the Web of Science as input to the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model has the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was increasingly stable. Both core journals paid broad attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
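
    The study's criterion of choosing the topic count that minimizes perplexity can be sketched with the gensim library; this is an assumed tool choice, and the toy corpus and parameter values below are illustrative stand-ins, not the study's data.

        from gensim.corpora import Dictionary
        from gensim.models import LdaModel
        import numpy as np

        # Toy stand-in for the tokenized abstracts (hypothetical data),
        # replicated so the perplexity estimates are less noisy.
        docs = [
            ["citation", "impact", "journal", "indicator"],
            ["h-index", "citation", "evaluation", "productivity"],
            ["topic", "model", "lda", "text"],
            ["collaboration", "coauthorship", "network", "institution"],
        ] * 25

        dictionary = Dictionary(docs)
        corpus = [dictionary.doc2bow(d) for d in docs]

        # Fit LDA for several topic counts and keep the one with the lowest
        # perplexity, mirroring the study's selection of 10 topics.
        for k in (2, 5, 10, 20):
            lda = LdaModel(corpus, num_topics=k, id2word=dictionary,
                           passes=5, random_state=0)
            # log_perplexity returns a per-word likelihood bound (base 2);
            # perplexity = 2 ** (-bound), so lower is better here.
            perplexity = np.exp2(-lda.log_perplexity(corpus))
            print(f"k={k:2d}  perplexity={perplexity:.1f}")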

  2. Factors to Consider When Designing Television Pictorials

    Science.gov (United States)

    Trohanis, Pascal; Du Monceau, Michael

    1971-01-01

    The authors have developed a framework for improving the visual communication element of television. After warning that seeing is not enough to ensure learning, they discuss the five pre-production components which research indicates should be considered when designing television pictorials. (Editor)

  3. Considering play : From method to analysis

    NARCIS (Netherlands)

    van Vught, J.F.; Glas, M.A.J.

    2017-01-01

    This paper deals with play as an important methodological issue when studying games as texts and is intended as a practical methodological guide. After considering text as both the structuring object and its plural processual activations, we argue that different methodological considerations

  4. Considering Bilingual Dictionaries Against a Corpus.

    African Journals Online (AJOL)

    The research reported on here was made possible by the Research Fund of the .... has this meaning, CONSIDER is followed by a that-clause (6). The patterns exemplified in (7) and (8) are rarer. (7) is an example of free direct speech and ....

  5. 40 CFR 227.21 - Uses considered.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment, Vol. 24 (revised 2010-07-01): § 227.21 Uses considered. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), OCEAN DUMPING CRITERIA FOR THE... living marine resources; (k) Actual or anticipated exploitation of non-living resources, including...

  6. 40 CFR 227.18 - Factors considered.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment, Vol. 24 (revised 2010-07-01): § 227.18 Factors considered. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), OCEAN DUMPING CRITERIA FOR... in the material of any constituents which might significantly affect living marine resources of...

  7. Lecture Notes on Topics in Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Alex W.

    2002-11-15

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for Nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  8. Lecture Notes on Topics in Accelerator Physics

    International Nuclear Information System (INIS)

    Chao, Alex W.

    2002-01-01

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for Nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  9. Risk assessment of topically applied products

    DEFF Research Database (Denmark)

    Søborg, Tue; Basse, Line Hollesen; Halling-Sørensen, Bent

    2007-01-01

    The human risk of harmful substances in semisolid topical dosage forms applied to normal skin and broken skin, respectively, was assessed. Bisphenol A diglycidyl ether (BADGE) and three derivatives of BADGE previously quantified in aqueous cream, and the UV filters 3-BC and 4-MBC, were used as model compounds. Tolerable daily intake (TDI) values have been established for BADGE and its derivatives. Endocrine disruption was chosen as the endpoint for 3-BC and 4-MBC. Skin permeation of the model compounds was investigated in vitro using pig skin membranes. Tape stripping was applied to simulate broken skin. ... parameters for estimating the risk. The immediate human risk of BADGE and derivatives in topical dosage forms was found to be low. However, local treatment of broken skin may lead to higher exposure to BADGE and derivatives compared with application to normal skin. 3-BC permeated skin at a higher flux than 4-MBC...

  10. Getting Started with Topic Modeling and MALLET

    Directory of Open Access Journals (Sweden)

    Shawn Graham

    2012-09-01

    In this lesson you will first learn what topic modeling is and why you might want to employ it in your research. You will then learn how to install and work with the MALLET natural language processing toolkit to do so. MALLET involves modifying an environment variable (essentially, setting up a short-cut so that your computer always knows where to find the MALLET program) and working with the command line (i.e., by typing in commands manually, rather than clicking on icons or menus). We will run the topic modeller on some example files, and look at the kinds of outputs that MALLET produces. This will give us a good idea of how it can be used on a corpus of texts to identify topics found in the documents without reading them individually.
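
    For orientation, the lesson's two command-line steps (importing a directory of text files, then training a topic model) can be scripted from Python roughly as follows. The paths are illustrative assumptions, and the flags shown are standard MALLET options rather than an excerpt from the lesson itself.

        import os
        import subprocess

        # Assumes the MALLET_HOME environment variable from the lesson points
        # at the unpacked MALLET directory (the fallback path is hypothetical).
        mallet = os.path.join(os.environ.get("MALLET_HOME", "/opt/mallet-2.0.8"),
                              "bin", "mallet")

        # Step 1: import a directory of plain-text files into MALLET's format.
        subprocess.run([mallet, "import-dir",
                        "--input", "sample-data/web/en",
                        "--output", "tutorial.mallet",
                        "--keep-sequence", "--remove-stopwords"], check=True)

        # Step 2: train a topic model; write the top words per topic and the
        # per-document topic proportions to plain-text files.
        subprocess.run([mallet, "train-topics",
                        "--input", "tutorial.mallet",
                        "--num-topics", "20",
                        "--output-topic-keys", "tutorial_keys.txt",
                        "--output-doc-topics", "tutorial_composition.txt"],
                       check=True)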

  11. Topical minoxidil: cardiac effects in bald man.

    Science.gov (United States)

    Leenen, F H; Smith, D L; Unger, W P

    1988-01-01

    Systemic cardiovascular effects during chronic treatment with topical minoxidil vs placebo were evaluated using a double-blind, randomized design for two parallel groups (n = 20 for minoxidil, n = 15 for placebo). During 6 months of follow-up, blood pressure did not change, whereas minoxidil increased heart rate by 3-5 beats min⁻¹. Compared with placebo, topical minoxidil caused significant increases in LV end-diastolic volume, in cardiac output (by 0.75 l min⁻¹) and in LV mass (by 5 g m⁻²). We conclude that in healthy subjects short-term use of topical minoxidil is unlikely to be detrimental. However, safety needs to be established regarding ischaemic symptoms in patients with coronary artery disease, as well as for the possible development of LV hypertrophy in healthy subjects during years of therapy. PMID:3191000

  12. Recent topics in NMR imaging and MRI

    International Nuclear Information System (INIS)

    Watanabe, Tokuko

    2002-01-01

    NMR and NMR imaging (MRI) are finding increasing use not only in the clinical and medical fields, but also in material, physicochemical, biological, geological, industrial and environmental applications. This short review is limited to two topics: new techniques and pulse sequences and their application to non-clinical fields that may have clinical application; and new trends in MR contrast agents. The former topic addresses pulse sequence and data analysis; dynamics such as diffusion, flow, velocity and velocimetry; chemometrics; pharmacological agents; and chemotherapy; the latter topic addresses contrast agents (CA) sensitive to biochemical activity; CA based on water exchange; molecular interactions and stability of CA; characteristics of emerging CA; superparamagnetic CA; and macromolecular CA. (author)

  13. Topical thrombin-related corneal calcification.

    Science.gov (United States)

    Kiratli, Hayyam; Irkeç, Murat; Alaçal, Sibel; Söylemezoğlu, Figen

    2006-09-01

    To report a highly unusual case of corneal calcification after brief intraoperative use of topical thrombin. A 44-year-old man underwent sclerouvectomy for ciliochoroidal leiomyoma, during which 35 NIH U/mL lyophilized bovine thrombin mixed with 9 mL of diluent containing 1500 mmol/mL calcium chloride was used. From the first postoperative day, corneal and anterior lenticular capsule calcifications developed, and the corneal involvement slightly enlarged thereafter. A year later, 2 corneal punch biopsies confirmed calcification mainly in the Bowman layer. Topical treatment with 1.5% ethylenediaminetetraacetic acid significantly restored corneal clarity. Six months later, a standard extracapsular cataract extraction with intraocular lens placement improved visual acuity to 20/60. This case suggests that topical thrombin drops with elevated calcium concentrations may cause acute corneal calcification in the Bowman layer and on the anterior lens capsule.

  14. Systemic vs. Topical Therapy for the Treatment of Vulvovaginal Candidiasis

    Directory of Open Access Journals (Sweden)

    Sebastian Faro

    1994-01-01

    It is estimated that 75% of all women will experience at least 1 episode of vulvovaginal candidiasis (VVC) during their lifetimes. Most patients with acute VVC can be treated with short-term regimens that optimize compliance. Since current topical and oral antifungals have shown comparably high efficacy rates, other issues should be considered in determining the most appropriate therapy. It is possible that the use of short-duration narrow-spectrum agents may increase selection of more resistant organisms which will result in an increase of recurrent VVC (RVVC). Women who are known or suspected to be pregnant and women of childbearing age who are not using a reliable means of contraception should receive topical therapy, as should those who are breast-feeding or receiving drugs that can interact with an oral azole and those who have previously experienced adverse effects during azole therapy. Because of the potential risks associated with systemic treatment, topical therapy with a broad-spectrum agent should be the method of choice for VVC, whereas systemic therapy should be reserved for either RVVC or cases where the benefits outweigh any possible adverse reactions.

  15. Topics in current aerosol research (Part 2)

    CERN Document Server

    Hidy, G M

    1972-01-01

    Topics in Current Aerosol Research, Part 2 contains some selected articles in the field of aerosol study. The chosen topics deal extensively with the theory of diffusiophoresis and thermophoresis. Also covered in the book is the mathematical treatment of integrodifferential equations originating from the theory of aerosol coagulation. The book is the third volume of the series entitled International Reviews in Aerosol Physics and Chemistry. The text offers significant understanding of the methods employed to develop a theory for thermophoretic and diffusiophoretic forces acting on spheres in t

  16. Conference on Stochastic Analysis and Related Topics

    CERN Document Server

    Peterson, Jonathon

    2017-01-01

    The articles in this collection are a sampling of some of the research presented during the conference “Stochastic Analysis and Related Topics”, held in May of 2015 at Purdue University in honor of the 60th birthday of Rodrigo Bañuelos. A wide variety of topics in probability theory is covered in these proceedings, including heat kernel estimates, Malliavin calculus, rough paths differential equations, Lévy processes, Brownian motion on manifolds, and spin glasses, among other topics.

  17. Applied atomic and collision physics special topics

    CERN Document Server

    Massey, H S W; Bederson, Benjamin

    1982-01-01

    Applied Atomic Collision Physics, Volume 5: Special Topics deals with topics on applications of atomic collisions that were not covered in the first four volumes of the treatise. The book opens with a chapter on ultrasensitive chemical detectors. This is followed by separate chapters on lighting, magnetohydrodynamic electrical power generation, gas breakdown and high voltage insulating gases, thermionic energy converters, and charged particle detectors. Subsequent chapters deal with the operation of multiwire drift and proportional chambers and streamer chambers and their use in high energy p

  18. Radiation risks -a possible teaching topic

    International Nuclear Information System (INIS)

    Howes, R.W.

    1975-01-01

    Radiation risks have been the subject of heated debate since 1969, due mainly to the energy crisis and the switch to nuclear power. Topics of this debate, including the controversy concerning the late radiobiological effects of low-level radiation, the social responsibility of modern scientists, the sometimes acrimonious discussion which has taken place over many years concerning radiation standards, and present-day misgivings over the environmental aspects of the nuclear power programme, are discussed, and suggestions are made of ways in which these topics could be introduced into teaching courses. (U.K.)

  19. Topics in the Foundations of General Relativity and Newtonian Gravitation Theory

    CERN Document Server

    Malament, David B

    2012-01-01

    In Topics in the Foundations of General Relativity and Newtonian Gravitation Theory, David B. Malament presents the basic logical-mathematical structure of general relativity and considers a number of special topics concerning the foundations of general relativity and its relation to Newtonian gravitation theory. These special topics include the geometrized formulation of Newtonian theory (also known as Newton-Cartan theory), the concept of rotation in general relativity, and Gödel spacetime. One of the highlights of the book is a no-go theorem that can be understood to show that there is

  20. System Reliability Analysis Considering Correlation of Performances

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Saekyeol; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Lim, Woochul [Mando Corporation, Seongnam (Korea, Republic of)

    2017-04-15

    Reliability analysis of a mechanical system has been developed in order to account for the uncertainties in product design that may arise from tolerances of design variables, noise, environmental factors, and material properties. In most previous studies, the reliability was calculated independently for each performance of the system. However, these conventional methods cannot consider the correlation between the performances of the system, which may lead to a difference between the reliability of the entire system and the reliability of each individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula, which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of the joint PDF of the performances and is compared with the individual reliability of each performance through mathematical examples and a two-bar truss example.
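
    As a rough illustration of the paper's idea (not its actual formulation), the sketch below couples two performance margins through a Gaussian copula and estimates series-system reliability by Monte Carlo. The marginal distributions, failure thresholds, and correlation value are invented for the example.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 200_000
        rho = 0.7  # assumed correlation between the two performances
        cov = np.array([[1.0, rho], [rho, 1.0]])

        # Sample correlated standard normals, map through the Gaussian copula
        # to uniforms, then to the physical marginals of each performance.
        z = rng.multivariate_normal(np.zeros(2), cov, size=n)
        u = stats.norm.cdf(z)
        g1 = stats.norm(loc=3.0, scale=1.0).ppf(u[:, 0])            # margin 1
        g2 = stats.lognorm(s=0.25, scale=np.exp(1.0)).ppf(u[:, 1])  # margin 2

        fail1 = g1 < 0.0             # individual failure events
        fail2 = g2 < 2.0
        system_fail = fail1 | fail2  # series system: fails if either fails

        print("R1 =", 1 - fail1.mean())
        print("R2 =", 1 - fail2.mean())
        print("R_system =", 1 - system_fail.mean())
        # The independence approximation R1 * R2 generally differs from
        # R_system once the performances are correlated, which is the
        # paper's motivation for modeling the joint PDF with a copula.
        print("independent approx =", (1 - fail1.mean()) * (1 - fail2.mean()))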