WorldWideScience

Sample records for supercomputers topics considered

  1. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...
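
    The factor-of-one-hundred claim is worth making concrete. A rough desktop analogue (an illustration, not from the book; on a vector machine the gap comes from vectorization) is an interpreted element-by-element loop versus the equivalent vectorized expression:

      # "Bad" vs "good" versions of the same triad computation; the gap between
      # them is routinely two orders of magnitude, echoing the claim above.
      import time
      import numpy as np

      n = 1_000_000
      b, c, d = (np.random.rand(n) for _ in range(3))

      t0 = time.perf_counter()
      a_slow = [b[i] + c[i] * d[i] for i in range(n)]   # scalar loop
      t1 = time.perf_counter()
      a_fast = b + c * d                                # vectorized triad
      t2 = time.perf_counter()

      print(f"loop {t1 - t0:.2f}s, vectorized {t2 - t1:.4f}s, "
            f"ratio ~{(t1 - t0) / (t2 - t1):.0f}x")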

  2. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interfaces to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interfaces as analysis tools. (LSP)

  3. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interfaces to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interfaces as analysis tools. (LSP)

  4. Energy sciences supercomputing 1990

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Kaiper, G.V. (eds.)

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the superconducting super collider; an overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; Monte Carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  5. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    Full Text Available This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S.; Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by

  6. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.
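
    As a back-of-the-envelope check (our arithmetic; it assumes the standard Blue Gene/P node of four 850 MHz PowerPC cores, each retiring 4 flops per cycle):

      $16\,\text{racks} \times 1024\,\text{nodes/rack} \times (4 \times 0.85\,\text{GHz} \times 4\,\text{flop/cycle}) \approx 16 \times 1024 \times 13.6\,\text{GFLOPS} \approx 223\,\text{TFLOPS}$

    which is consistent with the quoted 222-teraflop figure.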

  7. Emerging supercomputer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  8. 75 FR 47310 - Solicitation for Nominations for New Clinical Preventive Health Topics To Be Considered for...

    Science.gov (United States)

    2010-08-05

    ... disease; injury and violence-related disorders; infectious diseases; mental disorders and substance abuse; metabolic, nutritional and endocrine diseases; musculoskeletal conditions; obstetric and gynecological conditions; pediatric disorders; and, vision and hearing disorders). Selection of suggested topics will be...

  9. NSF Commits to Supercomputers.

    Science.gov (United States)

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  10. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial aims to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center, where over the last five years massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  11. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  12. Petaflop supercomputers of China

    Institute of Scientific and Technical Information of China (English)

    Guoliang CHEN

    2010-01-01

    After ten years of development, high performance computing (HPC) in China has made remarkable progress. In November 2010, the NUDT Tianhe-1A and the Dawning Nebulae respectively claimed the 1st and 3rd places in the Top500 Supercomputers List; this recognizes internationally the level that China has achieved in high performance computer manufacturing.

  13. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur...

  14. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  15. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
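
    The three message-passing networks correspond to distinct communication patterns an application issues. A minimal mpi4py sketch of those patterns (our example, not the patent's software; assumes mpi4py and an MPI runtime, launched with e.g. mpirun -n 4):

      # Point-to-point traffic rides a torus; reductions ride the collective
      # network; Barrier corresponds to the global barrier/notification network.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Nearest-neighbor exchange around a ring (torus-style traffic).
      token = comm.sendrecv(rank, dest=(rank + 1) % size, source=(rank - 1) % size)

      # Global reduction (what a dedicated collective network accelerates).
      total = comm.allreduce(rank, op=MPI.SUM)

      comm.Barrier()                      # global synchronization point
      if rank == 0:
          print(f"{size} ranks, sum of ranks = {total}")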

  16. Microprocessors: from desktops to supercomputers.

    Science.gov (United States)

    Baskett, F; Hennessy, J L

    1993-08-13

    Continuing improvements in integrated circuit technology and computer architecture have driven microprocessors to performance levels that rival those of supercomputers, at a fraction of the price. The use of sophisticated memory hierarchies enables microprocessor-based machines to have very large memories built from commodity dynamic random access memory while retaining the high bandwidth and low access time needed in a high-performance machine. Parallel processors composed of these high-performance microprocessors are becoming the supercomputing technology of choice for scientific and engineering applications. The challenges for these new supercomputers have been in developing multiprocessor architectures that are easy to program and that deliver high performance without extraordinary programming efforts by users. Recent progress in multiprocessor architecture has led to ways to meet these challenges.

  17. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  18. Improved Access to Supercomputers Boosts Chemical Applications.

    Science.gov (United States)

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling is reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  19. Desktop supercomputers. Advance medical imaging.

    Science.gov (United States)

    Frisiello, R S

    1991-02-01

    Medical imaging tools that radiologists as well as a wide range of clinicians and healthcare professionals have come to depend upon are emerging into the next phase of functionality. The strides being made in supercomputing technologies--including reduction of size and price--are pushing medical imaging to a new level of accuracy and functionality.

  20. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    Science.gov (United States)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  1. Will Your Next Supercomputer Come from Costco?

    Energy Technology Data Exchange (ETDEWEB)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.
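
    The "200+ Gflop" figure is consistent with the Cell processor's theoretical single-precision peak (our arithmetic; it assumes 8 SPEs at 3.2 GHz, each issuing a 4-wide fused multiply-add per cycle):

      $8 \times 3.2\,\text{GHz} \times 4\,\text{(SIMD width)} \times 2\,\text{(FMA)} = 204.8\,\text{GFLOPS}$

    Note that fewer SPEs are actually exposed to Linux on the PS3, so sustained figures come in lower.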

  2. Supercomputer debugging workshop `92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  3. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  4. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  5. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  6. Comparing Clusters and Supercomputers for Lattice QCD

    CERN Document Server

    Gottlieb, S

    2001-01-01

    Since the development of the Beowulf project to build a parallel computer from commodity PC components, there have been many such clusters built. The MILC QCD code has been run on a variety of clusters and supercomputers. Key design features are identified, and the cost effectiveness of clusters and supercomputers is compared.

  7. Low Cost Supercomputer for Applications in Physics

    Science.gov (United States)

    Ahmed, Maqsood; Ahmed, Rashid; Saeed, M. Alam; Rashid, Haris; Fazal-e-Aleem

    2007-02-01

    Using parallel processing techniques and commodity hardware, Beowulf supercomputers can be built at a much lower cost. Research organizations and educational institutions are using this technique to build their own high-performance clusters. In this paper we discuss the architecture and design of a Beowulf supercomputer and our own experience of building the BURRAQ cluster.
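
    The canonical first test on such a cluster is a small MPI job run across all nodes. A sketch (ours, assuming mpi4py; the BURRAQ-specific setup is not reproduced) that integrates pi in parallel:

      # Midpoint-rule integration of 4/(1+x^2) on [0,1], split across ranks.
      # Run with e.g.: mpirun -n 8 python pi_mpi.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n = 10_000_000                      # total intervals
      h = 1.0 / n
      local = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
                  for i in range(rank, n, size))
      pi = comm.reduce(local * h, op=MPI.SUM, root=0)

      if rank == 0:
          print(f"pi ~= {pi:.10f} using {size} processes")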

  8. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  9. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  10. £16 million investment for 'virtual supercomputer'

    CERN Multimedia

    Holland, C

    2003-01-01

    "The Particle Physics and Astronomy Research Council is to spend 16million [pounds] to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1/2 page)

  11. Supercomputers open window of opportunity for nursing.

    Science.gov (United States)

    Meintz, S L

    1993-01-01

    A window of opportunity was opened for nurse researchers with the High Performance Computing and Communications (HPCC) initiative in President Bush's 1992 fiscal-year budget. Nursing research moved into the high-performance computing environment through the University of Nevada Las Vegas/Cray Project for Nursing and Health Data Research (PNHDR). Using the CRAY YMP 2/216 supercomputer, the PNHDR established the validity of a supercomputer platform for nursing research. In addition, the research has identified a paradigm shift in statistical analysis, delineated actual and potential barriers to nursing research in a supercomputing environment, conceptualized a new branch of nursing science called Nurmetrics, and discovered a new avenue for nursing research utilizing supercomputing tools.

  12. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  13. Misleading Performance Reporting in the Supercomputing Field

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1992-01-01

    Full Text Available In a previous humorous note, I outlined 12 ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recent published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  14. Simulating Galactic Winds on Supercomputers

    Science.gov (United States)

    Schneider, Evan

    2017-01-01

    Galactic winds are a ubiquitous feature of rapidly star-forming galaxies. Observations of nearby galaxies have shown that winds are complex, multiphase phenomena, comprised of outflowing gas at a large range of densities, temperatures, and velocities. Describing how starburst-driven outflows originate, evolve, and affect the circumgalactic medium and gas supply of galaxies is an important challenge for theories of galaxy evolution. In this talk, I will discuss how we are using a new hydrodynamics code, Cholla, to improve our understanding of galactic winds. Cholla is a massively parallel, GPU-based code that takes advantage of specialized hardware on the newest generation of supercomputers. With Cholla, we can perform large, three-dimensional simulations of multiphase outflows, allowing us to track the coupling of mass and momentum between gas phases across hundreds of parsecs at sub-parsec resolution. The results of our recent simulations demonstrate that the evolution of cool gas in galactic winds is highly dependent on the initial structure of embedded clouds. In particular, we find that turbulent density structures lead to more efficient mass transfer from cool to hot phases of the wind. I will discuss the implications of our results both for the incorporation of winds into cosmological simulations, and for interpretations of observed multiphase winds and the circumgalactic medium of nearby galaxies.

  15. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  16. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
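
    Stripped of the Blue Gene/Q specifics, sub-jobs is an instance of the generic many-task pattern: pack many small independent runs into one allocation. A generic stand-in sketch (ours; the paper's actual mechanism is Swift over Cobalt sub-block jobs, and the echo binary below is only a placeholder for an application run):

      from concurrent.futures import ProcessPoolExecutor
      import subprocess

      def run_task(params: str) -> int:
          # Stand-in for one small application run within the allocation.
          return subprocess.run(["echo", params], capture_output=True).returncode

      if __name__ == "__main__":
          tasks = [f"--config case_{i}" for i in range(64)]
          # 8 concurrent "sub-jobs" sharing one pool of local resources.
          with ProcessPoolExecutor(max_workers=8) as pool:
              codes = list(pool.map(run_task, tasks))
          print(f"{codes.count(0)}/{len(tasks)} tasks succeeded")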

  17. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    Energy Technology Data Exchange (ETDEWEB)

    HSU, CHUNG-HSING [Los Alamos National Laboratory]; FENG, WU-CHUN [NON LANL]; CHING, AVERY [NON LANL]

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, we present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than our reference SMP platform.
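
    The quoted numbers pin down the efficiency claim (our arithmetic):

      $14\,\text{GFLOPS} / 185\,\text{W} \approx 75.7\,\text{MFLOPS/W}$

    so the reference SMP platform implied by the "over 300% better" comparison sits somewhere below roughly 19-25 MFLOPS/W, depending on how the percentage is read.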

  18. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at University of Illinois at Urbana-Champaign, IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  19. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  20. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  1. TOP500 Supercomputers for November 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer, installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems were built by Hewlett-Packard and are based on the AlphaServer SC computer system.

  2. Input/output behavior of supercomputing applications

    Science.gov (United States)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
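
    The two policies the traces were fed into are simple to state. An illustrative sketch (ours, not the paper's simulator; a real read-ahead prefetches asynchronously rather than inline):

      class ReadAhead:
          """Fetch the next block while the current one is consumed."""
          def __init__(self, path: str, block: int = 1 << 20):
              self.f = open(path, "rb")
              self.block = block
              self.ahead = self.f.read(block)       # prefetched next block

          def read_block(self) -> bytes:
              data = self.ahead
              self.ahead = self.f.read(self.block)  # prefetch for the next call
              return data

      class WriteBehind:
          """Absorb bursty writes in memory; flush when the buffer fills."""
          def __init__(self, path: str, limit: int = 8 << 20):
              self.f = open(path, "wb")
              self.pending, self.size, self.limit = [], 0, limit

          def write(self, data: bytes):
              self.pending.append(data)
              self.size += len(data)
              if self.size >= self.limit:           # deferred, batched flush
                  self.f.write(b"".join(self.pending))
                  self.pending, self.size = [], 0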

  3. GPUs: An Oasis in the Supercomputing Desert

    CERN Document Server

    Kamleh, Waseem

    2012-01-01

    A novel metric is introduced to compare the supercomputing resources available to academic researchers on a national basis. Data from the supercomputing Top 500 and the top 500 universities in the Academic Ranking of World Universities (ARWU) are combined to form the proposed "500/500" score for a given country. Australia scores poorly in the 500/500 metric when compared with other countries with a similar ARWU ranking, an indication that HPC-based researchers in Australia are at a relative disadvantage with respect to their overseas competitors. For HPC problems where single precision is sufficient, commodity GPUs provide a cost-effective means of quenching the computational thirst of otherwise parched Lattice practitioners traversing the Australian supercomputing desert. We explore some of the more difficult terrain in single precision territory, finding that BiCGStab is unreliable in single precision at large lattice sizes. We test the CGNE and CGNR forms of the conjugate gradient method on the normal equa...
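
    For reference, CGNR is the conjugate gradient method applied to the normal equations $A^T A x = A^T b$. A minimal dense sketch (ours, in numpy; the paper's lattice operators are not reproduced):

      import numpy as np

      def cgnr(A, b, tol=1e-5, max_iter=500):
          """Solve min ||Ax - b|| via CG on the normal equations."""
          x = np.zeros(A.shape[1], dtype=A.dtype)
          r = b - A @ x                  # residual of the original system
          z = A.T @ r                    # normal-equations residual
          p = z.copy()
          zz = z @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = zz / (Ap @ Ap)
              x += alpha * p
              r -= alpha * Ap
              z = A.T @ r
              zz_new = z @ z
              if np.sqrt(zz_new) < tol:
                  break
              p = z + (zz_new / zz) * p
              zz = zz_new
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((200, 100)).astype(np.float32)   # single precision
      b = rng.standard_normal(200).astype(np.float32)
      x = cgnr(A, b)
      print(np.linalg.norm(A.T @ (A @ x - b)))                 # should be small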

  4. Floating point arithmetic in future supercomputers

    Science.gov (United States)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
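
    The recommended layout is exactly what IEEE 754 double precision became. A quick sketch (ours) that unpacks the three fields from a Python float:

      import struct

      def fields(x: float):
          bits = int.from_bytes(struct.pack(">d", x), "big")
          sign = bits >> 63                        # 1 sign bit
          exponent = (bits >> 52) & 0x7FF          # 11 bits, bias 1023
          mantissa = bits & ((1 << 52) - 1)        # 52 bits, implicit leading 1
          return sign, exponent, mantissa

      print(fields(1.0))    # (0, 1023, 0): +1.0 * 2^(1023-1023)
      print(fields(-2.5))   # (1, 1024, 0x4000000000000): -1.25 * 2^1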

  5. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-01-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  6. Adventures in Supercomputing: An innovative program

    Energy Technology Data Exchange (ETDEWEB)

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator`s teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. ``A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode``. Not only is the process of teaching changed, but evidences of systemic reform are beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidences of systemic change in many of the AiS schools in Tennessee.

  7. Data-intensive computing on numerically-insensitive supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory; Fasel, Patricia K [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Heitmann, Katrin [Los Alamos National Laboratory; Lo, Li - Ta [Los Alamos National Laboratory; Patchett, John M [Los Alamos National Laboratory; Williams, Sean J [Los Alamos National Laboratory; Woodring, Jonathan L [Los Alamos National Laboratory; Wu, Joshua [Los Alamos National Laboratory; Hsu, Chung - Hsing [ONL

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  8. Parallel supercomputers for lattice gauge theory.

    Science.gov (United States)

    Brown, F R; Christ, N H

    1988-03-18

    During the past 10 years, particle physicists have increasingly employed numerical simulation to answer fundamental theoretical questions about the properties of quarks and gluons. The enormous computer resources required by quantum chromodynamic calculations have inspired the design and construction of very powerful, highly parallel, dedicated computers optimized for this work. This article gives a brief description of the numerical structure and current status of these large-scale lattice gauge theory calculations, with emphasis on the computational demands they make. The architecture, present state, and potential of these special-purpose supercomputers is described. It is argued that a numerical solution of low energy quantum chromodynamics may well be achieved by these machines.

  9. Modeling the weather with a data flow supercomputer

    Science.gov (United States)

    Dennis, J. B.; Gao, G.-R.; Todd, K. W.

    1984-01-01

    A static concept of data flow architecture is considered for a supercomputer for weather modeling. The machine level instructions are loaded into specific memory locations before computation is initiated, with only one instruction active at a time. The machine would have processing element, functional unit, array memory, memory routing and distribution routing network elements all contained on microprocessors. A value-oriented algorithmic language (VAL) would be employed and would have, as basic operations, simple functions deriving results from operand values. Details of the machine language format, computations with an array and file processing procedures are outlined. A global weather model is discussed in terms of a static architecture and the potential computation rate is analyzed. The results indicate that detailed design studies are warranted to quantify costs and parts fabrication requirements.

  10. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. ... from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs ... (LRZ). We conclude that perspectives on demand management are dependent on the electricity market and pricing in the geographical region and on the degree of control that a particular SC has in terms of power-purchase negotiation.

  11. Multi-petascale highly efficient parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  12. A workbench for tera-flop supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U. [High Performance Computing Center Stuttgart (HLRS), Stuttgart (Germany)

    2003-07-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  13. Seismic signal processing on heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
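
    The core per-pair operation is a cross-correlation of two station records. A synthetic-data sketch (ours; real pipelines add whitening, windowing and stacking on top of this):

      import numpy as np

      fs = 100.0                                  # sampling rate in Hz (assumed)
      n = 100_000
      rng = np.random.default_rng(1)
      field = rng.standard_normal(n + 500)
      trace_a = field[:n]                         # station A
      trace_b = field[120:120 + n]                # station B leads A by 120 samples

      # Frequency-domain cross-correlation, the bulk of the compute per pair.
      spec = np.fft.rfft(trace_a) * np.conj(np.fft.rfft(trace_b))
      xcorr = np.fft.irfft(spec, n)

      lag = int(np.argmax(xcorr))
      if lag >= n // 2:
          lag -= n                                # unwrap the circular lag
      print(f"estimated delay: {lag / fs:.2f} s") # ~ +1.20 s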

  14. Most Social Scientists Shun Free Use of Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  15. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  16. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master's thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
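
    For reference, the STREAM triad is the kernel a(i) = b(i) + q*c(i), scored in bytes moved per second. A rough numpy rendition (ours; the official benchmark is C/Fortran, and numpy's two passes move somewhat more data than the fused loop, so this under-reports):

      import time
      import numpy as np

      n = 20_000_000
      q = 3.0
      b, c = np.random.rand(n), np.random.rand(n)
      a = np.empty_like(b)

      best = float("inf")
      for _ in range(5):
          t0 = time.perf_counter()
          np.multiply(c, q, out=a)       # a = q*c
          np.add(a, b, out=a)            # a = b + q*c
          best = min(best, time.perf_counter() - t0)

      traffic = 3 * n * 8                # STREAM convention: read b, read c, write a
      print(f"triad bandwidth ~ {traffic / best / 1e9:.1f} GB/s")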

  17. Multiprocessing on supercomputers for computational aerodynamics

    Science.gov (United States)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  18. The PMS project Poor Man's Supercomputer

    CERN Document Server

    Csikor, Ferenc; Hegedüs, P; Horváth, V K; Katz, S D; Piróth, A

    2001-01-01

    We briefly describe the Poor Man's Supercomputer (PMS) project that is carried out at Eotvos University, Budapest. The goal is to develop a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest-neighbour interactions. To reach this goal we developed the PMS architecture using PC components and designed a special, low-cost communication hardware and the driver software for Linux OS. Our first implementation of the PMS includes 32 nodes (PMS1). The performance of the PMS1 was tested by Lattice Gauge Theory simulations. Using SU(3) pure gauge theory or the bosonic MSSM on the PMS1 computer we obtained a price-per-sustained-performance ratio of $3/Mflops. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.

  19. The BlueGene/L Supercomputer

    CERN Document Server

    Bhanot, G V; Gara, A; Vranas, P M; Bhanot, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2002-01-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 such nodes are connected into a 3-d torus with a geometry of 32x32x64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
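
    Addressing on such a torus is simple enough to sketch (our illustration): every node has six neighbors, with coordinates wrapping at the edges.

      DIMS = (32, 32, 64)                # the BlueGene/L torus geometry above

      def neighbors(x: int, y: int, z: int):
          """Return the six torus neighbors of node (x, y, z)."""
          coords = [x, y, z]
          out = []
          for axis in range(3):
              for step in (-1, 1):
                  n = coords.copy()
                  n[axis] = (n[axis] + step) % DIMS[axis]   # wrap-around link
                  out.append(tuple(n))
          return out

      # Even a corner node has six neighbors; every dimension wraps.
      print(neighbors(0, 0, 0))
      # [(31, 0, 0), (1, 0, 0), (0, 31, 0), (0, 1, 0), (0, 0, 63), (0, 0, 1)]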

  20. Explaining the Gap between Theoretical Peak Performance and Real Performance for Supercomputer Architectures

    Directory of Open Access Journals (Sweden)

    W. Schönauer

    1994-01-01

    Full Text Available The basic architectures of vector and parallel computers and their properties are presented, followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For a single operation, micromeasurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented, revealing in detail the losses for this operation. The global performance of a whole supercomputer is then considered by identifying reduction factors that reduce the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures as of January 1991 is briefly mentioned. Finally a user-friendly architecture for a supercomputer is proposed.
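
    The argument condenses into one relation (our notation; it assumes the reduction factors act multiplicatively and independently):

      $P_{\text{real}} = P_{\text{peak}} \cdot \prod_i f_i, \qquad 0 < f_i \le 1$

    where the $f_i$ cover memory bandwidth, vector length and startup, parallelization efficiency, and so on. Three unremarkable factors of 0.5, 0.5 and 0.4 already cut a 1 GFLOPS peak to 100 MFLOPS, which is the flavor of gap the paper documents.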

  1. World's biggest 'virtual supercomputer' given the go-ahead

    CERN Multimedia

    2003-01-01

    "The Particle Physics and Astronomy Research Council has today announced GBP 16 million to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1 page).

  2. Numerical infinities and infinitesimals in a new supercomputing framework

    Science.gov (United States)

    Sergeyev, Yaroslav D.

    2016-06-01

    Traditional computers can work numerically with finite numbers only. The Infinity Computer, patented recently in the USA and EU, overcomes this limitation. It is a computational device of a new kind, able to work numerically not only with finite quantities but with infinities and infinitesimals as well. The new supercomputing methodology is not related to non-standard analysis and uses neither Cantor's infinite cardinals nor ordinals. It is founded on Euclid's Common Notion 5: 'The whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols, as numerals belonging to a positional numeral system with an infinite radix introduced by a specific ad hoc axiom. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
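
    To make the positional-numeral idea concrete, here is a small illustrative sketch, not the patented Infinity Computer arithmetic: a numeral is stored as a finite map from grossone exponents to finite coefficients, and addition and multiplication act on those terms.

        # A numeral is a finite sum of terms c * G**p, where G is the infinite
        # radix ("grossone"): p > 0 terms are infinite, p = 0 finite, p < 0
        # infinitesimal.
        def add(x, y):
            out = dict(x)
            for p, c in y.items():
                out[p] = out.get(p, 0) + c
            return {p: c for p, c in out.items() if c != 0}

        def mul(x, y):
            out = {}
            for px, cx in x.items():
                for py, cy in y.items():
                    out[px + py] = out.get(px + py, 0) + cx * cy
            return out

        x = {1: 2, 0: 3}   # 2*G + 3
        y = {0: 1, -1: 5}  # 1 + 5/G
        print(add(x, y))   # {1: 2, 0: 4, -1: 5}
        print(mul(x, y))   # {1: 2, 0: 13, -1: 15}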

  3. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    Science.gov (United States)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy stand to reap great benefits.

  4. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    CERN Document Server

    Fluke, Christopher J; Barsdell, Benjamin R; Hassan, Amr H

    2010-01-01

    General purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best-practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks, and make the investment of time and effort to become early adopters of GPGPU in astronomy, s...

  5. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the Fusion-io 40 GB parallel NAND Flash disk array. The Fusion system specs are as follows
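
    Level-set expansion, the kernel of the graph benchmark in item (2), is essentially breadth-first search by frontiers. A minimal in-memory sketch (the benchmark itself streams adjacency data from storage) could look like this:

        def level_sets(adj, seeds):
            # Expand the seed set one level (frontier) at a time.
            seen = set(seeds)
            level, levels = list(seeds), [list(seeds)]
            while level:
                nxt = []
                for u in level:
                    for v in adj.get(u, ()):
                        if v not in seen:
                            seen.add(v)
                            nxt.append(v)
                if nxt:
                    levels.append(nxt)
                level = nxt
            return levels

        adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
        print(level_sets(adj, [0]))  # [[0], [1, 2], [3], [4]]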

  6. Taking ASCI supercomputing to the end game.

    Energy Technology Data Exchange (ETDEWEB)

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space; reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.

  7. Simulating functional magnetic materials on supercomputers.

    Science.gov (United States)

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10³ spin polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1₀ phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram.

  8. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  9. An integrated distributed processing interface for supercomputers and workstations

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse driven menus. We have also developed a distributed application that integrates a two-point boundary-value problem on one of our Cray supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language independent controls to show capabilities of the workstation/supercomputer combination. 8 refs.

  10. A New Hydrodynamic Model for Numerical Simulation of Interacting Galaxies on Intel Xeon Phi Supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Tutukov, Alexander

    2016-05-01

    This paper presents a new hydrodynamic model of interacting galaxies based on the joint solution of multicomponent hydrodynamic equations, first moments of the collisionless Boltzmann equation and the Poisson equation for gravity. Using this model, it is possible to formulate a unified numerical method for solving hyperbolic equations. This numerical method has been implemented for hybrid supercomputers with Intel Xeon Phi accelerators. The collision of spiral and disk galaxies considering the star formation process, supernova feedback and molecular hydrogen formation is shown as a simulation result.

  11. Bimatoprost Topical

    Science.gov (United States)

    Topical bimatoprost is used to treat hypotrichosis (less than the normal amount of hair) of the eyelashes by promoting ... growth of longer, thicker, and darker lashes. Topical bimatoprost is in a class of medications called prostaglandin ...

  12. Recent results from the Swinburne supercomputer software correlator

    Science.gov (United States)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.

  13. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
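
    The quoted throughput figures can be checked with a few lines of arithmetic. The total Kepler target count (~200,000 stars) used below is an assumption for scale, not a number from the abstract:

        # Figures quoted in the abstract, plus one assumed total target count.
        core_hours_per_star = 2000 / 16      # ~2000 injections at ~16/core-hour = 125
        stars = 0.16 * 200_000               # 16% of ~200,000 targets (assumed total)
        cores_needed = stars * core_hours_per_star / 200  # 200 wall-clock hours
        print(core_hours_per_star, int(cores_needed))     # 125.0 20000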

  14. Access to Supercomputers. Higher Education Panel Report 69.

    Science.gov (United States)

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  15. The Sky's the Limit When Super Students Meet Supercomputers.

    Science.gov (United States)

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  16. Brief Exploration on Technical Development of Key Applications at Supercomputing Center

    Institute of Scientific and Technical Information of China (English)

    党岗; 程志全

    2013-01-01

    At present, most of China's national supercomputing centers follow a construction model of local-government investment with market-oriented application development. Local governments care most about the high-performance computing applications and services that involve local enterprises and institutions, so the centers are often used for ordinary applications, and it is difficult for supercomputing to play its full strategic role. How to keep these exceptionally capable "aircraft carriers" viable, let them win new ground, and drive technological innovation has long been a research topic in the field. This paper offers a preliminary discussion of the challenges facing the key applications of domestic supercomputing centers and makes several suggestions for putting those key applications at the service of local development.

  17. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Maeno, T [Brookhaven National Laboratory (BNL); Mashinistov, R. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Nilsson, P [Brookhaven National Laboratory (BNL); Novikov, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Poyda, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Ryabinkin, E. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Teslyuk, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Tsulaia, V. [Lawrence Berkeley National Laboratory (LBNL); Velikhov, V. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Wen, G. [University of Wisconsin, Madison; Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of
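
    A minimal sketch of the light-weight MPI wrapper idea, not the actual PanDA pilot code: one rank per core, each launching an independent single-threaded payload, so one batch job fills a node's cores with Grid-style work.

        from mpi4py import MPI
        import subprocess
        import sys

        # Each rank runs one independent single-threaded payload; the command
        # here is a harmless stand-in, not the real PanDA payload.
        rank = MPI.COMM_WORLD.Get_rank()
        payload = [sys.executable, "-c", f"print('processing batch {rank}')"]
        subprocess.run(payload, check=True)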

  18. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.

  19. Applications of parallel supercomputers: Scientific results and computer science lessons

    Energy Technology Data Exchange (ETDEWEB)

    Fox, G.C.

    1989-07-12

    Parallel Computing has come of age with several commercial and in-house systems that deliver supercomputer performance. We illustrate this with several major computations completed or underway at Caltech on hypercubes, transputer arrays and the SIMD Connection Machine CM-2 and AMT DAP. Applications covered are lattice gauge theory, computational fluid dynamics, subatomic string dynamics, statistical and condensed matter physics, theoretical and experimental astronomy, quantum chemistry, plasma physics, grain dynamics, computer chess, graphics ray tracing, and Kalman filters. We use these applications to compare the performance of several advanced architecture computers including the conventional CRAY and ETA-10 supercomputers. We describe which problems are suitable for which computers in terms of a matching between problem and computer architecture. This is part of a set of lessons we draw for hardware, software, and performance. We speculate on the emergence of new academic disciplines motivated by the growing importance of computers. 138 refs., 23 figs., 10 tabs.

  20. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  1. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  2. Supercomputers ready for use as discovery machines for neuroscience

    OpenAIRE

    Kunkel, Susanne; Schmidt, Maximilian; Helias, Moritz; Eppler, Jochen Martin; Igarashi, Jun; Masumoto, Gen; Fukai, Tomoki; Ishii, Shin; Plesser, Hans Ekkehard; Morrison, Abigail; Diesmann, Markus

    2013-01-01

    NEST is a widely used tool to simulate biological spiking neural networks [1]. The simulator is subject to continuous development, which is driven by the requirements of the current neuroscientific questions. At present, a major part of the software development focuses on the improvement of the simulator's fundamental data structures in order to enable brain-scale simulations on supercomputers such as the Blue Gene system in Jülich and the K computer in Kobe. Based on our memory-u...

  3. Scientists turn to supercomputers for knowledge about universe

    CERN Multimedia

    White, G

    2003-01-01

    The DOE is funding the computers at the Center for Astrophysical Thermonuclear Flashes which is based at the University of Chicago and uses supercomputers at the nation's weapons labs to study explosions in and on certain stars. The DOE is picking up the project's bill in the hope that the work will help the agency learn to better simulate the blasts of nuclear warheads (1 page).

  4. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  5. Study of ATLAS TRT performance with GRID and supercomputers

    Science.gov (United States)

    Krasnopevtsev, D. V.; Klimentov, A. A.; Mashinistov, R. Yu.; Belyaev, N. L.; Ryabinkin, E. A.

    2016-09-01

    One of the most important problems to be solved for ATLAS physics analysis is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. The paper includes Transition Radiation Tracker performance results obtained using the ATLAS GRID and the Kurchatov Institute's Data Processing Center, including a Tier-1 grid site and a supercomputer, as well as an analysis of CPU efficiency during these studies.

  6. From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing

    CERN Document Server

    Groen, Derek

    2015-01-01

    We describe the political and technical complications encountered during the astronomical CosmoGrid project. CosmoGrid is a numerical study on the formation of large scale structure in the universe. The simulations are challenging due to the enormous dynamic range in spatial and temporal coordinates, as well as the enormous computer resources required. In CosmoGrid we dealt with the computational requirements by connecting up to four supercomputers via an optical network and making them operate as a single machine. This was challenging, if only for the fact that the supercomputers of our choice are separated by half the planet, three of them scattered across Europe and the fourth in Tokyo. The co-scheduling of multiple computers and the 'gridification' of the code enabled us to achieve an efficiency of up to 93% for this distributed intercontinental supercomputer. In this work, we find that high-performance computing on a grid can be done much more effectively if the sites involved are will...

  7. Proceedings of the first energy research power supercomputer users symposium

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers, and now high performance parallel computers, over the last year; this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  8. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
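
    The syntactic-template step can be illustrated in a few lines. The sketch below is not the paper's online clustering algorithm; it simply masks variable tokens so that messages sharing a structure fall into one group:

        import re
        from collections import defaultdict

        def template(msg):
            # Mask variable tokens so only the syntactic structure remains.
            msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
            return re.sub(r"\d+", "<NUM>", msg)

        groups = defaultdict(list)
        for line in ["node 12 fan speed 4100",
                     "node 7 fan speed 3900",
                     "ECC error at 0xdeadbeef"]:
            groups[template(line)].append(line)
        print(len(groups), "groups")  # 2 groups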

  9. Dapsone Topical

    Science.gov (United States)

    ... these steps: Gently wash the affected skin and pat dry with a soft towel. Ask your doctor ... back pain shortness of breath tiredness weakness dark brown urine fever yellow or pale skin Dapsone topical ...

  10. Ciclopirox Topical

    Science.gov (United States)

    ... Do not use nail polish or other nail cosmetic products on nails treated with ciclopirox topical solution. ... as well as any products such as vitamins, minerals, or other dietary supplements. You should bring this ...

  11. Tretinoin Topical

    Science.gov (United States)

    ... lotions, astringents, and perfumes); they can sting your skin, especially when you first use tretinoin.Do not use any other topical medications, especially benzoyl peroxide, salicylic acid (wart remover), and dandruff shampoos containing sulfur or ...

  12. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  13. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    OpenAIRE

    A. Gunzinger; Bäumle, B.; Frey, M.; Klebl, M.; Kocheisen, M.; Kohler, P.; Morel, R.; Müller, U.; Rosenthal, M.

    1996-01-01

    At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with conventional supercomputers, but electric power requirements are reduced by a factor of 1,000, weight is reduced by a factor of...

  14. Numerical simulations of astrophysical problems on massively parallel supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Glinsky, Boris

    2016-10-01

    In this paper, we present the latest version of our numerical model for simulating the dynamics of astrophysical objects, and a new realization of our AstroPhi code for Intel Xeon Phi based RSC PetaStream supercomputers. The co-design of the computational model for the description of astrophysical objects is described. The parallel implementation and scalability tests of the AstroPhi code are presented. We achieve 73% weak-scaling efficiency using 256 Intel Xeon Phi accelerators with 61,440 threads.

  15. AENEAS A Custom-built Parallel Supercomputer for Quantum Gravity

    CERN Document Server

    Hamber, H W

    1998-01-01

    Accurate Quantum Gravity calculations, based on the simplicial lattice formulation, are computationally very demanding and require vast amounts of computer resources. A custom-made 64-node parallel supercomputer capable of performing up to $2 \\times 10^{10}$ floating point operations per second has been assembled entirely out of commodity components, and has been operational for the last ten months. It will allow the numerical computation of a variety of quantities of physical interest in quantum gravity and related field theories, including the estimate of the critical exponents in the vicinity of the ultraviolet fixed point to an accuracy of a few percent.

  16. A special purpose silicon compiler for designing supercomputing VLSI systems

    Science.gov (United States)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should be integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate are reduced relative to silicon compilers based on PLAs, SLAs, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLAs, SLAs, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  17. Solidification in a Supercomputer: From Crystal Nuclei to Dendrite Assemblages

    Science.gov (United States)

    Shibuta, Yasushi; Ohno, Munekazu; Takaki, Tomohiro

    2015-08-01

    Thanks to the recent progress in high-performance computational environments, the range of applications of computational metallurgy is expanding rapidly. In this paper, cutting-edge simulations of solidification from atomic to microstructural levels performed on a graphics processing unit (GPU) architecture are introduced, together with a brief overview of advances in computational studies of solidification. In particular, million-atom molecular dynamics simulations captured the spontaneous evolution of anisotropy in a solid nucleus in an undercooled melt and homogeneous nucleation without any inducing factor, which is followed by grain growth. At the microstructural level, the quantitative phase-field model has been gaining importance as a powerful tool for predicting solidification microstructures. In this paper, the convergence behavior of simulation results obtained with this model is discussed in detail. Such convergence ensures the reliability of results of phase-field simulations. Using the quantitative phase-field model, the competitive growth of dendrite assemblages during the directional solidification of a binary alloy bicrystal at the millimeter scale is examined by performing two- and three-dimensional large-scale simulations by multi-GPU computation on the supercomputer TSUBAME2.5. This cutting-edge approach using a GPU supercomputer is opening a new phase in computational metallurgy.

  18. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations; this data was used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  19. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL; Parker, Lynne Edwards [ORNL

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can extend to a few thousand parameters, which must be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, putting building energy modeling out of reach for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
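
    A hedged sketch of the agent-training step: fit a fast surrogate on (parameter vector, simulated output) pairs such as the bulk EnergyPlus runs produce. The data below is synthetic and the model choice illustrative; the real agents and parametric space are far larger.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.random((5000, 8))                    # sampled building parameters
        y = X @ rng.random(8) + 0.1 * rng.standard_normal(5000)  # simulated outputs
        agent = RandomForestRegressor(n_estimators=100).fit(X, y)
        print(agent.predict(X[:3]))                  # near-instant vs. a full simulation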

  20. Optimizing Linpack Benchmark on GPU-Accelerated Petascale Supercomputer

    Institute of Scientific and Technical Information of China (English)

    Feng Wang; Can-Qun Yang; Yun-Fei Du; Juan Chen; Hui-Zhan Yi; Wei-Xia Xu

    2011-01-01

    In this paper we present the programming of the Linpack benchmark on the TianHe-1 system, the first petascale supercomputer of China and the largest GPU-accelerated heterogeneous system ever attempted at the time. A hybrid programming model consisting of MPI, OpenMP and streaming computing is described to exploit the task, thread and data parallelism of Linpack. We explain how we optimized the load distribution across the CPUs and GPUs using a two-level adaptive method and describe the implementation in detail. To overcome the low bandwidth of CPU-GPU communication, we present a software pipelining technique to hide the communication overhead. Combined with other traditional optimizations, the Linpack we developed achieved 196.7 GFLOPS on a single compute element of TianHe-1. This result is 70.1% of the peak compute capability and 3.3 times faster than the result obtained using the vendor's library. On the full configuration of TianHe-1, our optimizations resulted in a Linpack performance of 0.563 PFLOPS, which made TianHe-1 the 5th fastest supercomputer on the Top500 list in November 2009.
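
    The software-pipelining idea is to hide the CPU-GPU transfer of the next panel behind the update of the current one. In the sketch below a thread stands in for an asynchronous transfer engine; all names are illustrative:

        import threading

        def transfer(panel, done):
            done[panel] = True  # stand-in for an asynchronous host-to-device copy

        done = {}
        for k in range(4):
            t = threading.Thread(target=transfer, args=(k + 1, done))
            t.start()                                  # prefetch panel k+1
            busy = sum(i * i for i in range(100_000))  # update current panel k
            t.join()                                   # transfer finishes behind compute
        print("prefetched panels:", sorted(done))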

  1. Topical anesthesia

    Directory of Open Access Journals (Sweden)

    Mritunjay Kumar

    2015-01-01

    Topical anesthetics are being widely used in numerous medical and surgical sub-specialties such as anesthesia, ophthalmology, otorhinolaryngology, dentistry, urology, and aesthetic surgery. They cause superficial loss of pain sensation after direct application. Their delivery and effectiveness can be enhanced by using free bases; by increasing the drug concentration, lowering the melting point; by using physical and chemical permeation enhancers and lipid delivery vesicles. Various topical anesthetic agents available for use are eutectic mixture of local anesthetics, ELA-max, lidocaine, epinephrine, tetracaine, bupivanor, 4% tetracaine, benzocaine, proparacaine, Betacaine-LA, topicaine, lidoderm, S-caine patch™ and local anesthetic peel. While using them, careful attention must be paid to their pharmacology, area and duration of application, age and weight of the patients and possible side-effects.

  2. Topics in Nonlinear Dynamics

    DEFF Research Database (Denmark)

    Mosekilde, Erik

    Through a significant number of detailed and realistic examples this book illustrates how the insights gained over the past couple of decades in the fields of nonlinear dynamics and chaos theory can be applied in practice. Among the topics considered are microbiological reaction systems, ecological...

  3. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
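
    In the spirit of the Octotron model, the discovered topology reduces to a graph whose vertices are components and whose edges are interconnections. The neighbour records below are hypothetical stand-ins for what the Ethernet discovery step collects:

        # Vertices are components, edges are links; records are (a, b) pairs.
        discovered = [("switch1", "node101"), ("switch1", "node102"),
                      ("switch2", "switch1"), ("switch2", "node201")]

        graph = {}
        for a, b in discovered:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        print(sorted(graph["switch1"]))  # ['node101', 'node102', 'switch2']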

  4. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    Science.gov (United States)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources.
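
    The multiple time-stepping (MTS) idea behind the speedups above can be sketched generically: evaluate cheap short-range forces every step and the expensive long-range ones only every k-th step. The forces below are toy stand-ins, not the platelet model:

        def integrate(x, v, dt, steps, k=4):
            slow = 0.0
            for n in range(steps):
                if n % k == 0:
                    slow = expensive_force(x)  # long-range term, refreshed every k steps
                v += dt * (cheap_force(x) + slow)
                x += dt * v
            return x, v

        cheap_force = lambda x: -x                # toy short-range force
        expensive_force = lambda x: -0.01 * x**3  # toy long-range force
        print(integrate(1.0, 0.0, 1e-3, 1000))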

  5. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  6. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for ...

  7. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seem to be able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  8. Toward the Graphics Turing Scale on a Blue Gene Supercomputer

    CERN Document Server

    McGuigan, Michael

    2008-01-01

    We investigate raytracing performance that can be achieved on a class of Blue Gene supercomputers. We measure an 822-fold speedup over a Pentium IV on a 6144 processor Blue Gene/L. We measure the computational performance as a function of number of processors and problem size to determine the scaling performance of the raytracing calculation on the Blue Gene. We find nontrivial scaling behavior at large number of processors. We discuss applications of this technology to scientific visualization with advanced lighting and high resolution. We utilize three racks of a Blue Gene/L in our calculations, which is less than three percent of the capacity of the world's largest Blue Gene computer.

  9. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    Science.gov (United States)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

    Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.

  10. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  11. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    Science.gov (United States)

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.

  12. Solving global shallow water equations on heterogeneous supercomputers.

    Science.gov (United States)

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for the model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5- to 8-fold improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving near-ideal linear scaling efficiency, and sustained double-precision performance of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigms of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved.
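
    The idea behind the generalized partition scheme can be sketched as splitting the domain in proportion to each device's measured throughput so that CPU and accelerator finish at the same time; the throughput figures below are hypothetical.

    # Sketch: balance a domain between a CPU and an accelerator.
    def balanced_split(n_cells, throughput_cpu, throughput_acc):
        """Return (cpu_cells, accelerator_cells) proportional to throughput."""
        cpu_share = round(n_cells * throughput_cpu / (throughput_cpu + throughput_acc))
        return cpu_share, n_cells - cpu_share

    # E.g. an accelerator measured ~8x faster than a 12-core CPU node:
    print(balanced_split(n_cells=1_000_000, throughput_cpu=1.0, throughput_acc=8.0))
    # -> (111111, 888889): the CPU keeps about one ninth of the domain.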

  13. Virtualizing Super-Computation On-Board Uas

    Science.gov (United States)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed-up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.

  14. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-03-10

    This work presents a detailed implementation of a double precision, Non-Preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
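
    For reference, below is a minimal textbook sketch of the non-preconditioned Conjugate Gradient algorithm in NumPy; it is a generic illustration of the method being benchmarked, not the Roadrunner/Cell implementation from the report.

    # Sketch: non-preconditioned Conjugate Gradient for symmetric
    # positive-definite systems A x = b.
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                      # initial residual
        p = r.copy()                       # initial search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)      # step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p  # next A-conjugate direction
            rs_old = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))        # approx [0.0909, 0.6364]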

  15. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  16. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    Directory of Open Access Journals (Sweden)

    A. Gunzinger

    1996-01-01

    Full Text Available At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the MUSIC system performs on par with conventional supercomputers, while its electric power requirements are reduced by a factor of 1,000, its weight by a factor of 400, and its price by a factor of 100. Software development is a key issue of such parallel systems. This article focuses on the programming environment of the MUSIC system and on its applications.

  17. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  18. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  19. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. A reference was made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  20. A novel VLSI processor architecture for supercomputing arrays

    Science.gov (United States)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high-performance mask-programmable PAcube arrays.

  1. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  2. Developing and Deploying Advanced Algorithms to Novel Supercomputing Hardware

    CERN Document Server

    Brunner, Robert J; Myers, Adam D

    2007-01-01

    The objective of our research is to demonstrate the practical usage and orders-of-magnitude speedup of real-world applications by using alternative technologies to support high performance computing. Currently, the main barrier to the widespread adoption of this technology is the lack of development tools and case studies, which typically impedes non-specialists who might otherwise develop applications that could leverage these technologies. By partnering with the Innovative Systems Laboratory at the National Center for Supercomputing Applications, we have obtained access to several novel technologies, including several Field-Programmable Gate Array (FPGA) systems, NVidia Graphics Processing Units (GPUs), and the STI Cell BE platform. Our goal is to not only demonstrate the capabilities of these systems, but to also serve as guides for others to follow in our path. To date, we have explored the efficacy of the SRC-6 MAP-C and MAP-E and SGI RASC Athena and RC100 reconfigurable computing platforms in supporting a two-point co...

  3. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computation time of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the program was successfully ported to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  4. Developing Fortran Code for Kriging on the Stampede Supercomputer

    Science.gov (United States)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and the Message Passing Interface (MPI) bindings to Fortran. We have a function similar to the autofitVariogram found in the automap (Hiemstra et al., 2008) package and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
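
    A rough Python analogue of the automatic variogram fitting described above, using SciPy's curve_fit in place of R's autofitVariogram; the spherical model choice and the sample semivariances are illustrative assumptions, not the authors' Stampede code.

    # Sketch: fit a spherical variogram model to empirical semivariances.
    import numpy as np
    from scipy.optimize import curve_fit

    def spherical(h, nugget, sill, rng):
        """Spherical variogram model gamma(h)."""
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
        return np.where(h < rng, g, sill)

    # Hypothetical empirical semivariances at lag distances h:
    h = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
    gamma = np.array([0.30, 0.55, 0.80, 0.95, 1.00])

    params, _ = curve_fit(spherical, h, gamma, p0=[0.1, 1.0, 500.0])
    print("nugget, sill, range =", params)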

  5. Using the multistage cube network topology in parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, H.J.; Nation, W.G. (Purdue Univ., Lafayette, IN (USA). School of Electrical Engineering); Kruskal, C.P. (Maryland Univ., College Park, MD (USA). Dept. of Computer Science); Napolitano, L.M. Jr. (Sandia National Labs., Livermore, CA (USA))

    1989-12-01

    A variety of approaches to designing the interconnection network to support communications among the processors and memories of supercomputers employing large-scale parallel processing have been proposed and/or implemented. These approaches are often based on the multistage cube topology. This topology is the subject of much ongoing research and study because of the ways in which the multistage cube can be used. The attributes of the topology that make it useful are described. These include O(N log2 N) cost for an N input/output network, decentralized control, a variety of implementation options, good data permuting capability to support single instruction stream/multiple data stream (SIMD) parallelism, good throughput to support multiple instruction stream/multiple data stream (MIMD) parallelism, and ability to be partitioned into independent subnetworks to support reconfigurable systems. Examples of existing systems that use multistage cube networks are overviewed. The multistage cube topology can be converted into a single-stage network by associating with each switch in the network a processor (and a memory). Properties of systems that use the multistage cube network in this way are also examined.
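
    The decentralized control mentioned above rests on destination-tag routing: at stage k, a 2x2 switch is set by the k-th bit of the destination address. The sketch below is a didactic model of an N input/output multistage cube (omega) network, not code from the paper.

    # Sketch: destination-tag routing through log2(N) switch stages.
    def route(dest, stages):
        """Switch setting per stage: 0 = straight, 1 = exchange."""
        return [(dest >> (stages - 1 - k)) & 1 for k in range(stages)]

    # An N = 8 network has log2(8) = 3 stages of 2x2 switches:
    for dest in range(8):
        print(f"to output {dest:03b}: stage settings {route(dest, 3)}")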

  6. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF’s Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  7. Supercomputers ready for use as discovery machines for neuroscience

    Directory of Open Access Journals (Sweden)

    Moritz eHelias

    2012-11-01

    Full Text Available NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum-filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  8. Supercomputers ready for use as discovery machines for neuroscience.

    Science.gov (United States)

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.
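
    The role of the memory-consumption model mentioned above can be illustrated with a back-of-the-envelope sketch: does a network of N neurons with K synapses each fit on M nodes? The byte counts and node memory below are illustrative assumptions, not NEST's actual data structures.

    # Sketch: crude per-node memory estimate for a distributed network.
    def fits(n_neurons, syn_per_neuron, n_nodes,
             bytes_per_neuron=1_000, bytes_per_synapse=50,
             mem_per_node=16 * 2**30):
        # Assume neurons and their incoming synapses distribute evenly.
        per_node = (n_neurons * bytes_per_neuron
                    + n_neurons * syn_per_neuron * bytes_per_synapse) / n_nodes
        return per_node <= mem_per_node, per_node / 2**30   # (fits?, GiB/node)

    # 10^8 neurons with 10^4 synapses each spread over 80,000 nodes:
    print(fits(10**8, 10**4, 80_000))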

  9. Massively-parallel electrical-conductivity imaging of hydrocarbonsusing the Blue Gene/L supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Commer, M.; Newman, G.A.; Carazzone, J.J.; Dickens, T.A.; Green,K.E.; Wahrmund, L.A.; Willen, D.E.; Shiu, J.

    2007-05-16

    Large-scale controlled source electromagnetic (CSEM) three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. To cope with the typically large computational requirements of the 3D CSEM imaging problem, our strategies exploit computational parallelism and optimized finite-difference meshing. We report on an imaging experiment, utilizing 32,768 tasks/processors on the IBM Watson Research Blue Gene/L (BG/L) supercomputer. Over a 24-hour period, we were able to image a large scale marine CSEM field data set that previously required over four months of computing time on distributed clusters utilizing 1024 tasks on an Infiniband fabric. The total initial data misfit could be decreased by 67 percent within 72 completed inversion iterations, indicating an electrically resistive region in the southern survey area below a depth of 1500 m below the seafloor. The major part of the residual misfit stems from transmitter parallel receiver components that have an offset from the transmitter sail line (broadside configuration). Modeling confirms that improved broadside data fits can be achieved by considering anisotropic electrical conductivities. While delivering a satisfactory gross scale image for the depths of interest, the experiment provides important evidence for the necessity of discriminating between horizontal and vertical conductivities for maximally consistent 3D CSEM inversions.

  10. Credibility improves topical blog post retrieval

    NARCIS (Netherlands)

    Weerkamp, W.; de Rijke, M.

    2008-01-01

    Topical blog post retrieval is the task of ranking blog posts with respect to their relevance for a given topic. To improve topical blog post retrieval we incorporate textual credibility indicators in the retrieval process. We consider two groups of indicators: post level (determined using

  11. Credibility improves topical blog post retrieval

    NARCIS (Netherlands)

    Weerkamp, W.; de Rijke, M.

    2008-01-01

    Topical blog post retrieval is the task of ranking blog posts with respect to their relevance for a given topic. To improve topical blog post retrieval we incorporate textual credibility indicators in the retrieval process. We consider two groups of indicators: post level (determined using informati

  12. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    Science.gov (United States)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes developed within the deep interior of our planet, but with significant impact on the Earth's shape and structure, are becoming subject to numerical modelling by using high performance computing facilities. Nowadays, worldwide an increasing number of research centers decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to get deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction (20%) of the computing power. After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing

  13. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  14. Data mining method for anomaly detection in the supercomputer task flow

    Science.gov (United States)

    Voevodin, Vadim; Voevodin, Vladimir; Shaikhislamov, Denis; Nikitenko, Dmitry

    2016-10-01

    The efficiency of most supercomputer applications is extremely low. At the same time, the user rarely even suspects that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer's task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every job executed, and the data is used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify the application as one of three classes - normal, suspicious and definitely anomalous. The proposed approach has been demonstrated on actual applications running on the "Lomonosov" supercomputer.
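
    A minimal sketch of the classification step described above, with hypothetical per-job monitoring features; the Random Forest comes from scikit-learn, and none of the numbers reflect the actual Lomonosov data.

    # Sketch: classify jobs from integral monitoring characteristics.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [cpu_load, memory_gb, network_mb_s, io_ops_s] for one job.
    X_train = np.array([[0.95, 12.0, 300.0, 50.0],   # normal
                        [0.90, 10.0, 250.0, 40.0],   # normal
                        [0.10, 0.5, 1.0, 0.1],       # anomalous (idle job)
                        [0.50, 2.0, 5.0, 1.0]])      # suspicious
    y_train = np.array(["normal", "normal", "anomalous", "suspicious"])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    new_job = np.array([[0.08, 0.4, 0.8, 0.2]])
    print(clf.predict(new_job))   # -> ['anomalous']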

  15. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation

  16. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    Science.gov (United States)

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  17. Topical report review status

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-08-01

    This report provides industry with procedures for submitting topical reports, guidance on how the U.S. Nuclear Regulatory Commission (NRC) processes and responds to topical report submittals, and an accounting, with review schedules, of all topical reports currently accepted for review by the NRC. This report will be published annually. Each sponsoring organization with one or more topical reports accepted for review receives copies.

  18. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  19. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community's ...

  20. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
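
    The flavor of such a model can be sketched as a sum of compute, memory-contention, and communication terms; the cost expressions and constants below are illustrative assumptions, not the paper's calibrated framework.

    # Sketch: predicted runtime = compute + memory contention + communication.
    def predicted_time(work_flops, flops_per_core, cores_per_node,
                       bytes_moved, node_bandwidth,
                       msg_count, latency, msg_bytes, link_bandwidth):
        t_compute = work_flops / (flops_per_core * cores_per_node)
        # All cores on a node contend for the sustained memory bandwidth
        # (measured, e.g., with the STREAM benchmark).
        t_memory = bytes_moved / node_bandwidth
        # Linear latency + size/bandwidth model per message.
        t_comm = msg_count * (latency + msg_bytes / link_bandwidth)
        return t_compute + t_memory + t_comm

    print(predicted_time(1e12, 4e9, 12, 2e11, 1.2e10, 1000, 2e-6, 1e6, 3e9))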

  1. The impact of the U.S. supercomputing initiative will be global

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, Dona [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  2. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  3. [Experience in simulating the structural and dynamic features of small proteins using desktop supercomputers].

    Science.gov (United States)

    Kondrat'ev, M S; Kabanov, A V; Komarov, V M; Khechinashvili, N N; Samchenko, A A

    2011-01-01

    We present the results of theoretical studies of the structural and dynamic features of peptides and small proteins, carried out by quantum chemical and molecular dynamics methods on high-performance graphics workstations ("desktop supercomputers") using distributed calculations with CUDA technology.

  4. Child Development & Behavior Topics

    Science.gov (United States)


  5. Freshman Health Topics

    Science.gov (United States)

    Hovde, Karen

    2011-01-01

    This article examines a cluster of health topics that are frequently selected by students in lower division classes. Topics address issues relating to addictive substances, including alcohol and tobacco, eating disorders, obesity, and dieting. Analysis of the topics examines their interrelationships and organization in the reference literature.…

  6. Interactive steering of supercomputing simulation for aerodynamic noise radiated from square cylinder; Supercomputer wo mochiita steering system ni yoru kakuchu kara hoshasareru kurikion no suchi kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Yokono, Y. [Toshiba Corp., Tokyo (Japan); Fujita, H. [Tokyo Inst. of Technology, Tokyo (Japan). Precision Engineering Lab.

    1995-03-25

    This paper describes extensive computer simulation for aerodynamic noise radiated from a square cylinder using an interactive steering supercomputing simulation system. The unsteady incompressible three-dimensional Navier-Stokes equations are solved by the finite volume method using a steering system which can visualize the numerical process during calculation and alter the numerical parameters. Using the fluctuating surface pressure of the square cylinder, the farfield sound pressure is calculated based on Lighthill-Curle's equation. The results are compared with those of low noise wind tunnel experiments, and good agreement is observed for the peak spectrum frequency of the sound pressure level. 14 refs., 10 figs.

  7. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  8. Syntactic Topic Models

    CERN Document Server

    Boyd-Graber, Jordan

    2010-01-01

    The syntactic topic model (STM) is a Bayesian nonparametric model of language that discovers latent distributions of words (topics) that are both semantically and syntactically coherent. The STM models dependency parsed corpora where sentences are grouped into documents. It assumes that each word is drawn from a latent topic chosen by combining document-level features and the local syntactic context. Each document has a distribution over latent topics, as in topic models, which provides the semantic consistency. Each element in the dependency parse tree also has a distribution over the topics of its children, as in latent-state syntax models, which provides the syntactic consistency. These distributions are convolved so that the topic of each word is likely under both its document and syntactic context. We derive a fast posterior inference algorithm based on variational methods. We report qualitative and quantitative studies on both synthetic data and hand-parsed documents. We show that the STM is a more pred...

  9. Syntacticized topics in Kurmuk

    DEFF Research Database (Denmark)

    Andersen, Torben

    2015-01-01

    This article argues that Kurmuk, a little-described Western Nilotic language, is characterized by a syntacticized topic whose grammatical relation is variable. In this language, declarative clauses have as topic an obligatory preverbal NP which is either a subject, an object or an adjunct. The grammatical relation of the topic is expressed by a voice-like inflection of the verb, here called orientation. While subject-orientation is morphologically unmarked, object-oriented and adjunct-oriented verbs are marked by a subject suffix or by a suffix indicating that the topic is not subject, and adjunct-orientation differs from object-orientation by a marked tone pattern. Topic choice largely reflects information structure by indicating topic continuity. The topic also plays a crucial role in relative clauses and in clauses with contrastive constituent focus, in that objects and adjuncts can only be relativized...

  10. PREFACE: CEWQO Topical Issue CEWQO Topical Issue

    Science.gov (United States)

    Bozic, Mirjana; Man'ko, Margarita

    2009-09-01

    This topical issue of Physica Scripta collects selected peer-reviewed contributions based on invited and contributed talks and posters presented at the 15th Central European Workshop on Quantum Optics (CEWQO), which took place in Belgrade 29 May-3 June 2008 (http://cewqo08.phy.bg.ac.yu). On behalf of the whole community of the workshop, we thank the referees for their careful reading and useful suggestions which helped to improve all of the submitted papers. A brief description of CEWQO The Central European Workshop on Quantum Optics is a series of conferences started informally in Budapest in 1992. Sometimes small events transform into important conferences, as in the case of CEWQO. Professor Jozsef Janszky, from the Research Institute of Solid State Physics and Optics, is the founder of this series. Margarita Man'ko obtained the following information from Jozsef Janszky during her visit to Budapest, within the framework of cooperation between the Russian and Hungarian Academies of Sciences in 2005. He organized a small workshop on quantum optics in Budapest in 1992 with John Klauder as a main speaker. Then, bearing in mind that a year before Janszky himself was invited by Vladimir Buzek to give a seminar on the same topic in Bratislava, he decided to assign the name 'Central European Workshop on Quantum Optics', considering the seminar in Bratislava to be the first workshop and the one in Budapest the second. The third formal workshop took place in Bratislava in 1993 organized by Vladimir Buzek, then in 1994 (Budapest, by Jozsef Janszky), 1995 and 1996 (Budmerice, Slovakia, by Vladimir Buzek), 1997 (Prague, by Igor Jex), 1999 (Olomouc, Czech Republic, by Zdenek Hradil), 2000 (Balatonfüred, Hungary, by Jozsef Janszky), 2001 (Prague, by Igor Jex), 2002 (Szeged, Hungary, by Mihaly Benedict), 2003 (Rostock, Germany, by Werner Vogel and

  11. The TianHe-1A Supercomputer: Its Hardware and Software

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Xiang-Ke Liao; Kai Lu; Qing-Feng Hu; Jun-Qiang Song; Jin-Shu Su

    2011-01-01

    This paper presents an overview of the TianHe-1A (TH-1A) supercomputer, which was built by the National University of Defense Technology of China (NUDT). TH-1A adopts a hybrid architecture by integrating CPUs and GPUs, and its interconnect network is a proprietary high-speed communication network. The theoretical peak performance of TH-1A is 4700 TFlops, and its LINPACK test result is 2566 TFlops. It was ranked No. 1 on the TOP500 list released in November 2010. TH-1A is now deployed in the National Supercomputer Center in Tianjin and provides high performance computing services. TH-1A has played an important role in many applications, such as oil exploration, weather forecasting, and bio-medical research.

  12. HACC: Simulating Sky Surveys on State-of-the-Art Supercomputing Architectures

    CERN Document Server

    Habib, Salman; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukic, Zarija; Sehrish, Saba; Liao, Wei-keng

    2014-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex data stream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of prog...

  13. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    Science.gov (United States)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling an efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  14. Sandia`s network for supercomputing `95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

    The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected: to demonstrate the functionality and capability of an AT&T Globeview 20Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  15. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  16. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  17. Towards 21st century stellar models: Star clusters, supercomputing and asteroseismology

    Science.gov (United States)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.; Meakin, C.; Stello, D.; Christensen-Dalsgaard, J.; Kuehn, C.; De Silva, G. M.; Arnett, W. D.; Lattanzio, J. C.; MacLean, B. T.

    2016-09-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy - through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys - are placing stellar models under greater quantitative scrutiny than ever. The model limitations are being exposed and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling. This study uses observational data from HST, VLT, AAT, Kepler, and supercomputing resources in Australia provided by the National Computational Infrastructure (NCI) and Pawsey Supercomputing Centre.

  18. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
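
    The cross-correlation of failures with executing jobs can be sketched as an interval join: each failure event is matched to the jobs whose execution window contains it. The event kinds, job records, and timestamps below are hypothetical, not Titan's log format.

    # Sketch: match failure timestamps to overlapping job intervals.
    failures = [(1010, "GPU error"), (2500, "node heartbeat fault")]
    jobs = [("job-A", 900, 1800),    # (job_id, start, end)
            ("job-B", 1700, 3000),
            ("job-C", 100, 800)]

    for t, kind in failures:
        hit = [jid for jid, start, end in jobs if start <= t <= end]
        print(f"failure '{kind}' at t={t} overlapped jobs: {hit or 'none'}")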

  19. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. On the path towards Exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and the weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
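
    For a sense of the provisioning path named above (libvirt + QEMU/KVM), here is a minimal sketch using the libvirt Python bindings to boot one transient guest. The domain XML (name, disk image, bridge) is purely illustrative and not the paper's Cray compute-node configuration:

      import libvirt

      domain_xml = """
      <domain type='kvm'>
        <name>vcluster-node0</name>
        <memory unit='GiB'>4</memory>
        <vcpu>4</vcpu>
        <os><type arch='x86_64'>hvm</type></os>
        <devices>
          <disk type='file' device='disk'>
            <source file='/var/lib/libvirt/images/vcluster-node0.qcow2'/>
            <target dev='vda' bus='virtio'/>
          </disk>
          <interface type='bridge'>   <!-- bridged network, in the spirit of
                                           Ethernet-over-Aries emulation -->
            <source bridge='br0'/>
          </interface>
        </devices>
      </domain>
      """

      conn = libvirt.open("qemu:///system")   # connect to the node's hypervisor
      dom = conn.createXML(domain_xml, 0)     # boot a transient VM
      print("started", dom.name())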

  20. TSP:A Heterogeneous Multiprocessor Supercomputing System Based on i860XP

    Institute of Scientific and Technical Information of China (English)

    黄国勇; 李三立

    1994-01-01

    Numerous new RISC processors provide support for supercomputing. By using the "mini-Cray" i860 superscalar processor, an add-on board has been developed to boost the performance of a real-time system, and a parallel heterogeneous multiprocessor supercomputing system, TSP, has been constructed. In this paper, we present the system design considerations and describe the architecture of the TSP and its features.

  1. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  2. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created a basis for the development of a new research area, Economics of Quality. Its tools make it possible to use simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing the functioning of complex socio-economic systems. We are deeply convinced that the extensive application and development of such models, together with system modeling using supercomputer technologies, will raise the study of socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the simulation of multi-agent social systems and, no less importantly, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that the growth of computing power has made it possible to describe the behavior of the many separate fragments of a complex system, such as a socio-economic system. The article also reviews the experience of foreign scientists and practitioners in running AFM on supercomputers, analyzes an example AFM developed at CEMI RAS, and discusses the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on simulation modeling to forecast the population of St. Petersburg under three scenarios, population being one of the major factors influencing the development of a socio-economic system and the quality of life, are presented in the article.

  3. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
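
    As a tiny illustration of one of the data structures named above, a sorted k-mer list can be built per genome and then merged to locate shared k-mers. The sequence, k, and function name below are assumptions for illustration only:

      def sorted_kmer_list(seq, k=8):
          """All (k-mer, position) pairs sorted lexicographically, so shared
          k-mers between genomes can be found by merging two such lists."""
          return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

      genome = "ACGTACGTGGTACCGT"
      for kmer, pos in sorted_kmer_list(genome, k=4)[:5]:
          print(kmer, pos)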

  4. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  5. [Topical therapy of rosacea].

    Science.gov (United States)

    Schöfer, H

    2013-07-01

    Metronidazole and azelaic acid are the only topical medications approved for rosacea. All other topical treatments for rosacea and its special forms are used off-label. Topical steroids are not indicated in rosacea, because of their side effects (induction of steroid rosacea, high risk of facial skin atrophy, and high risk of rebound after cessation of therapy). Topical as well as systemic steroids are allowed only as initial and short term therapy for acute forms of rosacea (e.g. rosacea fulminans). Papular and pustular rosacea is the major indication for topical therapy. Sebaceous gland and connective tissue hyperplasia in glandular-hypertrophic rosacea as well as erythema in erythematous rosacea do not respond well to topical measures. A new active substance, the alpha-2-adrenoreceptor agonist brimonidine, will be approved soon for the topical treatment of erythema in rosacea. All severe forms of rosacea should initially be treated with a combination of topical and systemic agents. After improvement of the clinical symptoms, topical treatment alone is usually adequate to maintain the control.

  6. Women's Health Topics

    Science.gov (United States)


  7. Researching Distressing Topics

    Directory of Open Access Journals (Sweden)

    Sharon Jackson

    2013-05-01

    Full Text Available Qualitative researchers who explore sensitive topics may expose themselves to emotional distress. Consequently, researchers are often faced with the challenge of maintaining emotional equilibrium during the research process. However, discussion on the management of difficult emotions has occupied a peripheral place within accounts of research practice. With rare exceptions, the focus of published accounts is concentrated on the analysis of the emotional phenomena that emerge during the collection of primary research data. Hence, there is a comparative absence of a dialogue around the emotional dimensions of working with secondary data sources. This article highlights some of the complex ways in which emotions enter the research process during secondary analysis, and the ways in which we engaged with and managed emotional states such as anger, sadness, and horror. The concepts of emotional labor and emotional reflexivity are used to consider the ways in which we “worked with” and “worked on” emotion. In doing so, we draw on our collective experiences of working on two collaborative projects with ChildLine Scotland in which a secondary analysis was conducted on children’s narratives of distress, worry, abuse, and neglect.

  8. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  9. Large-scale Particle Simulations for Debris Flows using Dynamic Load Balance on a GPU-rich Supercomputer

    Science.gov (United States)

    Tsuzuki, Satori; Aoki, Takayuki

    2016-04-01

    Numerical simulation of debris flows involving countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to carrying out simulations of flows interacting with objects. In this paper, we propose an efficient method to realize a large-scale simulation of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to keep the same number of particles in each decomposed domain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles happens during the time integration, and the frequency of de-fragmentation is chosen by taking into account the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interactions is introduced to reduce memory consumption drastically. It is found that sorting the particle data for the neighboring-particle list built with the linked-list method greatly improves memory access when applied at a certain interval. The weak and strong scalabilities of an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale tsunami debris-flow simulation with 10,368 floating rubble objects, using 117 million particles, was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at the Tokyo Institute of Technology.
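
    As a rough illustration of the domain decomposition strategy mentioned above, the following sketch assigns particles to equal-count domains by sorting them along a Morton (Z-order) space-filling curve. The grid resolution, cell size, and function names are assumptions, and non-negative coordinates are assumed:

      def morton3d(ix, iy, iz, bits=10):
          """Interleave the bits of three grid indices into one Morton key."""
          key = 0
          for b in range(bits):
              key |= ((ix >> b) & 1) << (3 * b)
              key |= ((iy >> b) & 1) << (3 * b + 1)
              key |= ((iz >> b) & 1) << (3 * b + 2)
          return key

      def balance_particles(positions, n_domains, cell=1.0, bits=10):
          """Sort particles along the curve, then cut into equal-count chunks,
          so every GPU domain holds a similar number of particles."""
          keyed = sorted(
              (morton3d(int(x / cell), int(y / cell), int(z / cell), bits), i)
              for i, (x, y, z) in enumerate(positions)
          )
          chunk = (len(keyed) + n_domains - 1) // n_domains
          return [[i for _, i in keyed[d * chunk:(d + 1) * chunk]]
                  for d in range(n_domains)]

    Because the curve preserves spatial locality, equal-count cuts along it yield compact domains, which is what keeps both the load balance and the halo-exchange volume under control.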

  10. BLAS (Basic Linear Algebra Subroutines), linear algebra modules, and supercomputers. Technical report for period ending 15 December 1984

    Energy Technology Data Exchange (ETDEWEB)

    Rice, J.R.

    1984-12-31

    On October 29 and 30, 1984 about 20 people met at Purdue University to consider extensions to the Basic Linear Algebra Subroutines (BLAS) and linear algebra software modules in general. The need for these extensions and new sets of modules is largely due to the advent of new supercomputer architectures, which make it difficult for ordinary coding techniques to achieve even a significant fraction of the potential computing power. The workshop format was one of informal presentations with ample discussions, followed by sessions of general discussion of the issues raised. This report is a summary of the presentations, the issues raised, the conclusions reached, and the open-issue discussions. Each participant had an opportunity to comment on this report, but it also clearly reflects the author's filtering of the extensive discussions. Section 2 describes seven proposals for linear algebra software modules and Section 3 describes four presentations on the use of such modules. Discussion summaries are given next: Section 4 for those topics where near consensus was reached, and Section 5 for those where the issues were left open.

  11. Topic Identification in Discourse

    CERN Document Server

    Chen, K

    1995-01-01

    This paper proposes a corpus-based language model for topic identification. We analyze the association of noun-noun and noun-verb pairs in the LOB Corpus. The word association norms are based on three factors: 1) word importance, 2) pair co-occurrence, and 3) distance. They are trained at the paragraph and sentence levels for noun-noun and noun-verb pairs, respectively. Under the topic coherence postulation, the nouns that have the strongest connectivities with the other nouns and verbs in the discourse form the preferred topic set. The collocational semantics is then used to identify the topics of paragraphs and to discuss the topic shift phenomenon among paragraphs.
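
    A toy sketch of this kind of distance-weighted association scoring, simplified to plain token pairs (the paper's actual norms also weight word importance and distinguish noun-noun from noun-verb pairs, which this sketch does not):

      from collections import Counter
      from itertools import combinations

      def association_scores(sentences):
          """Score word pairs by co-occurrence, damped by token distance."""
          pair_score = Counter()
          for words in sentences:
              for (i, w1), (j, w2) in combinations(enumerate(words), 2):
                  pair_score[(w1, w2)] += 1.0 / (1 + abs(i - j))
          return pair_score

      def preferred_topics(sentences, k=3):
          """Words with the strongest total connectivity form the topic set."""
          strength = Counter()
          for (w1, w2), s in association_scores(sentences).items():
              strength[w1] += s
              strength[w2] += s
          return [w for w, _ in strength.most_common(k)]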

  12. Considering Student Coaching

    Science.gov (United States)

    Keen, James P.

    2014-01-01

    What does student coaching involve and what considerations make sense in deciding to engage an outside contractor to provide personal coaching? The author explores coaching in light of his own professional experience and uses this reflection as a platform from which to consider the pros and cons of student coaching when deciding whether to choose…

  13. Diclofenac Topical (osteoarthritis pain)

    Science.gov (United States)

    ... gel (Voltaren) is used to relieve pain from osteoarthritis (arthritis caused by a breakdown of the lining ... Diclofenac topical liquid (Pennsaid) is used to relieve osteoarthritis pain in the knees. Diclofenac is in a ...

  14. Advanced Topics in Aerodynamics

    DEFF Research Database (Denmark)

    Filippone, Antonino

    1999-01-01

    "Advanced Topics in Aerodynamics" is a comprehensive electronic guide to aerodynamics,computational fluid dynamics, aeronautics, aerospace propulsion systems, design and relatedtechnology. We report data, tables, graphics, sketches,examples, results, photos, technical andscientific literature...

  15. Advanced Topics in Aerodynamics

    DEFF Research Database (Denmark)

    Filippone, Antonino

    1999-01-01

    "Advanced Topics in Aerodynamics" is a comprehensive electronic guide to aerodynamics,computational fluid dynamics, aeronautics, aerospace propulsion systems, design and relatedtechnology. We report data, tables, graphics, sketches,examples, results, photos, technical andscientific literature...

  16. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    Science.gov (United States)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  17. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
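
    The light-weight MPI wrapper idea is simple to sketch. The following mpi4py fragment is an illustration of the pattern, not the actual PanDA pilot code; the payload command is a placeholder. Each rank launches one single-threaded payload, so a batch of serial jobs runs in parallel across the cores of a worker node:

      import subprocess
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # One serial payload per rank; MPI is used only for fan-out and
      # collecting exit status, not for communication inside the payload.
      jobs = ["./simulate_event_batch --seed %d" % s for s in range(size)]
      result = subprocess.call(jobs[rank], shell=True)

      codes = comm.gather(result, root=0)   # report overall status on rank 0
      if rank == 0:
          print("payload exit codes:", codes)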

  18. Supplemental topics on voids

    Energy Technology Data Exchange (ETDEWEB)

    Rood, H.J.

    1988-09-01

    Several topics concerning voids are presented, supplementing the report of Rood (1988). The discovery of the Coma supercluster and void and the recognition of the cosmological significance of superclusters and voids are reviewed. Galaxy redshift surveys and redshift surveys for the Abell clusters and very distant objects are discussed. Solar system and extragalactic dynamics are examined. Also, topics for future observational research on voids are recommended. 50 references.

  19. Topical treatment of melasma

    Directory of Open Access Journals (Sweden)

    Bandyopadhyay Debabrata

    2009-01-01

    Full Text Available Melasma is a common hypermelanotic disorder affecting the face that is associated with considerable psychological impacts. The management of melasma is challenging and requires a long-term treatment plan. In addition to avoidance of aggravating factors like oral pills and ultraviolet exposure, topical therapy has remained the mainstay of treatment. Multiple options for topical treatment are available, of which hydroquinone (HQ is the most commonly prescribed agent. Besides HQ, other topical agents for which varying degrees of evidence for clinical efficacy exist include azelaic acid, kojic acid, retinoids, topical steroids, glycolic acid, mequinol, and arbutin. Topical medications modify various stages of melanogenesis, the most common mode of action being inhibition of the enzyme, tyrosinase. Combination therapy is the preferred mode of treatment for the synergism and reduction of untoward effects. The most popular combination consists of HQ, a topical steroid, and retinoic acid. Prolonged HQ usage may lead to untoward effects like depigmentation and exogenous ochronosis. The search for safer alternatives has given rise to the development of many newer agents, several of them from natural sources. Well-designed controlled clinical trials are needed to clarify their role in the routine management of melasma.

  20. TOPICAL TREATMENT OF MELASMA

    Science.gov (United States)

    Bandyopadhyay, Debabrata

    2009-01-01

    Melasma is a common hypermelanotic disorder affecting the face that is associated with considerable psychological impacts. The management of melasma is challenging and requires a long-term treatment plan. In addition to avoidance of aggravating factors like oral pills and ultraviolet exposure, topical therapy has remained the mainstay of treatment. Multiple options for topical treatment are available, of which hydroquinone (HQ) is the most commonly prescribed agent. Besides HQ, other topical agents for which varying degrees of evidence for clinical efficacy exist include azelaic acid, kojic acid, retinoids, topical steroids, glycolic acid, mequinol, and arbutin. Topical medications modify various stages of melanogenesis, the most common mode of action being inhibition of the enzyme, tyrosinase. Combination therapy is the preferred mode of treatment for the synergism and reduction of untoward effects. The most popular combination consists of HQ, a topical steroid, and retinoic acid. Prolonged HQ usage may lead to untoward effects like depigmentation and exogenous ochronosis. The search for safer alternatives has given rise to the development of many newer agents, several of them from natural sources. Well-designed controlled clinical trials are needed to clarify their role in the routine management of melasma. PMID:20101327

  1. Development of the general interpolants method for the CYBER 200 series of supercomputers

    Science.gov (United States)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. An elliptic and quasi-parabolic version of the GIM code are discussed. Turbulence models, algebraic and differential equations, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  2. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at the shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are the two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
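
    For readers unfamiliar with the stencil shape mentioned above: a 13-point stencil in 3D corresponds to fourth-order central differences, the center point plus two neighbors in each direction along each axis. A minimal numpy sketch of such a Laplacian update, with periodic wrap-around assumed for brevity (the grid and driver are illustrative, not AWP-ODC's kernel):

      import numpy as np

      # Standard 4th-order second-derivative weights (-1/12, 4/3, -5/2, 4/3,
      # -1/12); the center value -5/2 summed over three axes gives -15/2.
      c0, c1, c2 = -15.0 / 2.0, 4.0 / 3.0, -1.0 / 12.0

      def laplacian_13pt(u):
          """Apply the 13-point 4th-order Laplacian to the interior of u."""
          lap = c0 * u[2:-2, 2:-2, 2:-2]
          for axis in range(3):
              for offset, c in ((1, c1), (2, c2)):
                  lap = lap + c * (np.roll(u, offset, axis)
                                   + np.roll(u, -offset, axis))[2:-2, 2:-2, 2:-2]
          return lap

      u = np.random.rand(16, 16, 16)
      print(laplacian_13pt(u).shape)   # (12, 12, 12): interior points only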

  3. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    Science.gov (United States)

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
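
    The task-parallel structure described here is essentially a master-worker farm. The following mpi4py sketch shows the generic pattern only (it is not the actual AutoDock4 MPI code); the dock() stub, ligand names, and tags are placeholders, and it assumes at least two ranks and more ligands than ranks:

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TAG_WORK, TAG_STOP = 1, 2

      def dock(ligand):
          """Placeholder for a real docking-engine call; returns (name, score)."""
          return (ligand, -7.5)

      if rank == 0:
          ligands = ["ligand_%06d" % i for i in range(1000)]
          results, next_job = [], 0
          status = MPI.Status()
          for w in range(1, size):               # prime every worker once
              comm.send(ligands[next_job], dest=w, tag=TAG_WORK)
              next_job += 1
          while len(results) < len(ligands):     # collect, then refill or stop
              results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_WORK,
                                       status=status))
              if next_job < len(ligands):
                  comm.send(ligands[next_job], dest=status.Get_source(),
                            tag=TAG_WORK)
                  next_job += 1
              else:
                  comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
          print("best score:", min(results, key=lambda r: r[1]))
      else:
          status = MPI.Status()
          while True:
              job = comm.recv(source=0, status=status)
              if status.Get_tag() == TAG_STOP:
                  break
              comm.send(dock(job), dest=0, tag=TAG_WORK)

    Because finished workers immediately receive the next ligand, slow dockings do not stall the rest of the screen, which is what makes the task-parallel layout attractive at the million-compound scale.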

  4. Topical Drugs for Pain Relief

    Directory of Open Access Journals (Sweden)

    Anjali Srinivasan

    2015-03-01

    Full Text Available Topical therapy helps patients with oral and perioral pain problems such as ulcers, burning mouth syndrome, temporomandibular disorders, neuromas, neuropathies and neuralgias. Topical drugs used in the field of dentistry are topical anaesthetics, topical analgesics, topical antibiotics and topical corticosteroids. They provide a symptomatic or curative effect. Topical drugs are easy to apply, avoid hepatic first-pass metabolism, and are more site-specific. However, they can only be used for medications that require low plasma concentrations to achieve a therapeutic effect.

  5. Topical Drugs for Pain Relief

    OpenAIRE

    Anjali Srinivasan; Prashanth Shenai; Laxmikanth Chatra; Veena KM; Prasanna Kumar Rao

    2015-01-01

    Topical therapy helps patients with oral and perioral pain problems such as ulcers, burning mouth syndrome, temporomandibular disorders, neuromas, neuropathies and neuralgias. Topical drugs used in the field of dentistry are topical anaesthetics, topical analgesics, topical antibiotics and topical corticosteroids. They provide a symptomatic or curative effect. Topical drugs are easy to apply, avoid hepatic first-pass metabolism, and are more site-specific. However, they can only be used for medications that ...

  6. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. First, the MRBT model is described: an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an online simulator in an emergency management system is considered.
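
    The Gaussian-like solution mentioned above can be written down compactly. A hedged sketch in the standard Gaussian-plume form, where the dispersion widths sigma_y and sigma_z (which grow with downwind distance) and all inputs are illustrative, not MRBT's actual parametrization:

      import math

      def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
          """Steady-state Gaussian plume with ground reflection.
          q: source rate, u: wind speed, (y, z): crosswind and vertical
          receptor coordinates, h: effective release height."""
          lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
          vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                      + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
          return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

      print(plume_concentration(q=1.0, u=5.0, y=0.0, z=1.5, h=10.0,
                                sigma_y=20.0, sigma_z=10.0))

    Since each receptor point is independent, a field of such evaluations parallelizes trivially, which is what made the model a natural fit for the QUADRICS array architecture.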

  7. Telecommuting. Factors to consider.

    Science.gov (United States)

    D'Arruda, K A

    2001-10-01

    1. Telecommuting is a work arrangement in which employees work part time or full time from their homes or smaller telework centers. They communicate with employers via computer. 2. Telecommuting can raise legal issues for companies. Can telecommuting be considered a reasonable accommodation under the Americans With Disabilities Act? When at home, is a worker injured within the course and scope of their employment for purposes of workers' compensation? 3. Occupational and environmental health nurses may need to alter existing programs to meet the distinct needs of telecommuters. Often, there are ergonomic issues and home office safety issues which are not of concern to other employees. Additionally, occupational and environmental health nurses may have to offer programs in new formats (e.g., Internet or Intranet programs) to effectively communicate with teleworkers.

  8. Implicit stage topics

    Directory of Open Access Journals (Sweden)

    Karen Lahousse

    2008-04-01

    Full Text Available It has often been proposed that sentence-initial spatio-temporal elements specify the frame in which the whole proposition takes place and are topical (i.e. thematic). Whereas considerable attention has been paid to explicit spatio-temporal topics, Erteschik-Shir (1997, 1999) argues that spatio-temporal topics, or stage topics, can also be implicit. In this article we provide evidence in favour of the notion of implicit stage topic. We show that a certain number of nominal inversion cases in French, a syntactic configuration which is triggered by the presence of an explicit stage topic, are explained by the presence of an implicit stage topic. The fact that implicit stage topics interact with syntactic structure the same way explicit stage topics do constitutes a strong empirical argument in favour of their existence.

  9. Scheduling Supercomputers.

    Science.gov (United States)

    1983-02-01

    no task is scheduled with overlap. Let numpi be the total number of preemptions and idle slots of size at most t0 that are introduced. We see that if no usable block remains on Q_{m-k}, then numpi < m-k. Otherwise, numpi <= m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled

  10. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool, as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  11. Safety of Topical Dermatologic Medications in Pregnancy.

    Science.gov (United States)

    Patel, Viral M; Schwartz, Robert A; Lambert, W Clark

    2016-07-01

    Dermatologic drugs should be employed with caution in women of childbearing age who are pregnant or considering pregnancy. Topical drugs have little systemic absorption. Therefore, they are deemed safer than oral or parenteral agents and less likely to harm the fetus. However, their safety profile must be assessed cautiously, as there is limited available data. In this article, we aggregate human and animal studies and provide recommendations on using topical dermatologic medications in pregnancy. J Drugs Dermatol. 2016;15(7):830-834.

  12. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Powerful computers are obligatory for running large physical simulations, effectively splitting the thesis into two parts. The first part deals with the method of Feynman diagram sampling and discusses the theoretical background of the method itself. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced to set the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction; in the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.
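
    To convey the flavor of the sampling method, here is a toy Metropolis walk over the order n of a series expansion, with the alternating series for exp(-x) standing in for diagram weights; the estimator normalizes against the exactly known order-0 term. This illustrates the general idea only and is not the thesis code:

      import math
      import random

      x = 1.5
      def term(n):                     # "diagram weight" at expansion order n
          return (-x) ** n / math.factorial(n)

      n, visits0, signsum, steps = 0, 0, 0.0, 200000
      for _ in range(steps):
          m = n + random.choice((-1, 1))       # propose one order up or down
          if m >= 0 and random.random() < abs(term(m)) / abs(term(n)):
              n = m                            # Metropolis accept on |weight|
          visits0 += (n == 0)
          signsum += math.copysign(1.0, term(n))

      # sum_n a_n ~= |a_0| * <sign> / (fraction of time spent at n = 0)
      estimate = abs(term(0)) * (signsum / steps) / (visits0 / steps)
      print(estimate, "vs exact", math.exp(-x))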

  13. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time...

  14. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    De, Kaushik; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  15. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    Science.gov (United States)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. Conference photograph. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher, Chair (Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherrill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey, Co-chair (Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard, Coordinator (Louisiana State University).

  16. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Berkbigler, K. P. (Kathryn P.); Bush, B. W. (Brian W.); Davis, Kei,; Hoisie, A. (Adolfy); Smith, S. A. (Steve A.)

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and of the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, who are simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and presentations of several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  17. Groundwater cooling of a supercomputer in Perth, Western Australia: hydrogeological simulations and thermal sustainability

    Science.gov (United States)

    Sheldon, Heather A.; Schaubs, Peter M.; Rachakonda, Praveen K.; Trefry, Michael G.; Reid, Lynn B.; Lester, Daniel R.; Metcalfe, Guy; Poulet, Thomas; Regenauer-Lieb, Klaus

    2015-12-01

    Groundwater cooling (GWC) is a sustainable alternative to conventional cooling technologies for supercomputers. A GWC system has been implemented for the Pawsey Supercomputing Centre in Perth, Western Australia. Groundwater is extracted from the Mullaloo Aquifer at 20.8 °C and passes through a heat exchanger before returning to the same aquifer. Hydrogeological simulations of the GWC system were used to assess its performance and sustainability. Simulations were run with cooling capacities of 0.5 or 2.5 megawatts thermal (MWth), with scenarios representing various combinations of pumping rate, injection temperature and hydrogeological parameter values. The simulated system generates a thermal plume in the Mullaloo Aquifer and overlying Superficial Aquifer. Thermal breakthrough (transfer of heat from injection to production wells) occurred in 2.7-4.3 years for a 2.5 MWth system. Shielding (reinjection of cool groundwater between the injection and production wells) resulted in earlier thermal breakthrough but reduced the rate of temperature increase after breakthrough, such that shielding was beneficial after approximately 5 years of pumping. Increasing injection temperature was preferable to increasing flow rate for maintaining cooling capacity after thermal breakthrough. Thermal impacts on existing wells were small, with up to 10 wells experiencing a temperature increase ≥ 0.1 °C (largest increase 6 °C).

  18. OpenMC:Towards Simplifying Programming for TianHe Supercomputers

    Institute of Scientific and Technical Information of China (English)

    廖湘科; 杨灿群; 唐滔; 易会战; 王锋; 吴强; 薛京灵

    2014-01-01

    Modern petascale and future exascale systems are massively heterogeneous architectures. Developing productive intra-node programming models is crucial toward addressing their programming challenge. We introduce a directive-based intra-node programming model, OpenMC, and show that this new model can achieve ease of programming, high performance, and the degree of portability desired for heterogeneous nodes, especially those in TianHe supercomputers. While existing models are geared towards offloading computations to accelerators (typically one), OpenMC aims to more uniformly and adequately exploit the potential offered by multiple CPUs and accelerators in a compute node. OpenMC achieves this by providing a unified abstraction of hardware resources as workers and facilitating the exploitation of asynchronous task parallelism on the workers. We present an overview of OpenMC, a prototyping implementation, and results from some initial comparisons with OpenMP and hand-written code in developing six applications on two types of nodes from TianHe supercomputers.

  19. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SC's) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  20. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved using the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  1. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high-speed (teraflop), large-memory desktop supercomputing for the modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground that equips current students for employment in an emerging field, with the skills necessary to access the large supercomputing systems now present at DOE laboratories, and it gives NIU faculty a foundation to leap well beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skill, with concomitant publication avenues. The components of the Hewlett Packard computer obtained with the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability, through multiple MATLAB licenses, to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has only recently been obtained at NIU, this effort in software development is at an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and the appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  2. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
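
    The agent-generation step amounts to fitting a fast surrogate on (input parameters -> simulated output) pairs. A hedged sketch of that idea with scikit-learn, where the synthetic data stands in for EnergyPlus runs and the model choice is an assumption rather than the Autotune implementation:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(5000, 8))    # 8 building parameters per run
      # Synthetic stand-in for the EnergyPlus output of each run:
      y = X @ rng.uniform(size=8) + 0.1 * rng.standard_normal(5000)

      surrogate = RandomForestRegressor(n_estimators=200).fit(X[:4000], y[:4000])
      print("holdout R^2:", surrogate.score(X[4000:], y[4000:]))

    Once trained, the surrogate evaluates candidate parameter sets in microseconds instead of minutes, which is what makes iterative calibration affordable.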

  3. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high-frequency data and analytics are described, and directions for future development are discussed. Currently, the key mechanism for preventing catastrophic market action is the "circuit breaker." We believe a more graduated approach, similar to the "yellow light" used in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
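
    Of the two indicators, the Herfindahl-Hirschman Index has a simple closed form: the sum of squared volume shares across trading venues. A minimal sketch with made-up venue volumes follows; the paper's exact volume-based variant may differ.

    ```python
    # Textbook HHI: sum of squared market shares. Values near 1 indicate a
    # concentrated market; low values indicate fragmentation. Volumes invented.
    def hhi(volumes):
        total = sum(volumes)
        return sum((v / total) ** 2 for v in volumes)

    venue_volumes = [5_200_000, 3_100_000, 900_000, 400_000]
    print(f"HHI = {hhi(venue_volumes):.3f}")
    ```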

  4. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; D'Azevedo, Eduardo [ORNL]; Philip, Bobby [ORNL]; Worley, Patrick H [ORNL]

    2016-01-01

    On large supercomputers, the job scheduling system may assign a non-contiguous node allocation to a user application, depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts the communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes both the layout of the allocated nodes and the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor-join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
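
    One of the reordering methods named above, spectral bisection, can be sketched in a few lines: order the MPI ranks by the Fiedler vector of the application's communication graph, so that heavily communicating ranks are placed near each other in the node list. The communication matrix below is synthetic; the actual tool works from measured mpiP data.

    ```python
    # Hedged sketch of spectral rank reordering, not the paper's implementation.
    import numpy as np

    comm = np.random.default_rng(1).random((16, 16))
    comm = (comm + comm.T) / 2.0          # symmetric communication volumes
    np.fill_diagonal(comm, 0.0)

    laplacian = np.diag(comm.sum(axis=1)) - comm
    _, eigvecs = np.linalg.eigh(laplacian)
    fiedler = eigvecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
    mapping = np.argsort(fiedler)         # place ranks along the Fiedler vector
    print("suggested rank order:", mapping)
    ```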

  5. BRICS To Be Considered?

    Directory of Open Access Journals (Sweden)

    Georgy Toloraya

    2016-09-01

    most efficient when the countries consolidate their positions in the Group of 20 or other international organizations and communities. BRICS positions have become very noticeable in such instances. However, the BRICS countries have not yet reached an adequate level of cooperation in promoting their interests in the United Nations, which could be considered one trouble spot for the group. This article presents a comprehensive analysis of various factors affecting the ongoing formation and development of the BRICS, and offers possible formats for its institutionalization, expansion with new members, and options for competing with the new structures of economic growth and regional trade and economic unions. It concludes that the BRICS should remain a strategic priority for Russia

  6. Extracting Topic Words from the Web for Dialogue Sentence Generation

    OpenAIRE

    下川, 尚亮; Rafal, Rzepka; 荒木, 健治

    2009-01-01

    In this paper we extract topic words from Internet Relay Chat utterances. In such dialogues there are many more spoken-language expressions than in blogs or typical Web pages, and we presume that the constantly changing topic is difficult to determine only from nouns, which are usually used for topic recognition. In this paper we propose a method for determining a conversation topic that also considers associated adjectives and verbs retrieved from the Web. Our first experiments show that extracting asso...

  7. Topical ketoprofen patch.

    Science.gov (United States)

    Mazières, Bernard

    2005-01-01

    Although oral nonsteroidal anti-inflammatory drugs (NSAIDs) are effective in the treatment of a variety of acute and chronic pain conditions, their use may be associated with serious systemic adverse effects, particularly gastrointestinal disorders. In order to minimise the incidence of systemic events related to such agents, topical NSAIDs have been developed. Topical NSAIDs, applied as gels, creams or sprays, penetrate the skin, subcutaneous fatty tissue and muscle in amounts that are sufficient to exert a therapeutic effect on peripheral and central mechanisms in the absence of high plasma concentrations. Data indicate that topical NSAIDs are effective at relieving pain in a number of acute and chronic pain indications. This review article discusses the pharmacokinetics, efficacy and tolerability of a new formulation of ketoprofen available as a topical patch. The topical patch containing ketoprofen 100 mg as the active principle has been developed using a novel delivery system that dispenses therapeutic doses of the drug directly to the site of injury. Pharmacokinetic data indicate that although plasma levels of ketoprofen are higher when the drug is administered as a patch versus a gel, the total systemic bioavailability of ketoprofen 100 mg administered via a patch is no more than 10% of that reported for ketoprofen 100 mg administered orally. Because the patch facilitates ketoprofen delivery over a 24-hour period, the drug remains continually present in the tissue subjacent to the site of application. High tissue but low plasma ketoprofen concentrations mean that while tissue concentrations are high enough to exert a therapeutic effect, plasma concentrations remain low enough not to result in systemic adverse events caused by elevated serum NSAID levels. Phase III clinical trials in patients with non-articular rheumatism and traumatic painful soft tissue injuries showed that the topical ketoprofen patch was significantly more effective than placebo at

  8. Multi-Topic Tracking Model for dynamic social network

    Science.gov (United States)

    Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun

    2016-07-01

    The topic tracking problem has attracted much attention in recent decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on a dynamic Bayesian network, namely the Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes the influence phenomenon, the selection phenomenon, the document generative process and the evolution of textual topics into account. Specifically, in our MTTD model, a Gibbs random field is defined to model the influence of the historical status of users in the network and the interdependency between them, in order to capture the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interest in topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered, to ensure the continuity of the topic itself in the topic evolution model. The Expectation Maximization (EM) algorithm is utilized to estimate the parameters of the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model performs better than Popular Event Tracking (PET) and the Dynamic Topic Model (DTM) in generalization performance, topic interpretability, topic content evolution and topic popularity evolution.
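
    Of the components listed, PLSA admits a particularly compact EM implementation. The sketch below fits plain PLSA to a toy count matrix; it is a bare-bones illustration of one building block, not the full MTTD model, and all shapes and data are invented.

    ```python
    # Plain PLSA via EM: theta is P(z|d), phi is P(w|z), N is a (docs x words)
    # count matrix. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    N = rng.integers(0, 5, size=(20, 30)).astype(float)
    D, W, Z = N.shape[0], N.shape[1], 4

    theta = rng.dirichlet(np.ones(Z), size=D)         # P(z|d), shape (D, Z)
    phi = rng.dirichlet(np.ones(W), size=Z)           # P(w|z), shape (Z, W)

    for _ in range(50):
        # E-step: responsibilities P(z|d,w), shape (D, W, Z)
        joint = theta[:, None, :] * phi.T[None, :, :]
        resp = joint / joint.sum(axis=2, keepdims=True)
        # M-step: re-estimate phi and theta from expected counts
        weighted = N[:, :, None] * resp
        phi = weighted.sum(axis=0).T
        phi /= phi.sum(axis=1, keepdims=True)
        theta = weighted.sum(axis=1)
        theta /= theta.sum(axis=1, keepdims=True)
    ```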

  9. Characters and Topical Diversity

    DEFF Research Database (Denmark)

    Eriksson, Rune

    2014-01-01

    The purpose of this article is to contribute to our understanding of the difference between the bestseller and the non-bestseller in nonfiction. It is noticed that many bestsellers in nonfiction belong to the sub-genre of creative nonfiction, but also that the topics in this kind of literature i...

  10. Transportation: Topic Paper E.

    Science.gov (United States)

    National Council on the Handicapped, Washington, DC.

    As one of a series of topic papers assessing federal laws and programs affecting persons with disabilities, this paper reviews the issue of transportation services. In the area of urban mass transit, four relevant pieces of legislation and public transportation accessibility regulations are cited, and cost issues are explored. Paratransit systems,…

  11. Characters and Topical Diversity

    DEFF Research Database (Denmark)

    Eriksson, Rune

    2014-01-01

    is largely ignored by the critics. Thus, the article tests how topics may work in creative nonfiction. Two Danish bestsellers belonging to the genre, Frank's Mit smukke genom (My Beautiful Genome), about genomics, and Buk-Swienty's Slagtebænk Dybbøl ('Slaughter-bench Dybbøl'), a history book, are chosen...

  12. Selected topics in magnetism

    CERN Document Server

    Gupta, L C

    1993-01-01

    Part of the ""Frontiers in Solid State Sciences"" series, this volume presents essays on such topics as spin fluctuations in Heisenberg magnets, quenching of spin fluctuations by high magnetic fields, and kondo effect and heavy fermions in rare earths amongst others.

  13. Contrastive topics decomposed

    Directory of Open Access Journals (Sweden)

    Michael Wagner

    2012-12-01

    Full Text Available The analysis of contrastive topics introduced in Büring 1997b and further developed in Büring 2003 relies on distinguishing two types of constituents that introduce alternatives: the sentence focus, which is marked by a FOC feature, and the contrastive topic, which is marked by a CT feature. A non-compositional rule of interpretation that refers to these features is used to derive a topic semantic value, a nested set of sets of propositions. This paper presents evidence for a correlation between the restrictive syntax of nested focus operators and the syntax of contrastive topics, a correlation which is unexpected under this analysis. A compositional analysis is proposed that only makes use of the flatter focus semantic values introduced by focus operators. The analysis aims at integrating insights from the original analysis while at the same time capturing the observed syntactic restrictions. http://dx.doi.org/10.3765/sp.5.8

  14. Topics for Mathematics Clubs.

    Science.gov (United States)

    Dalton, LeRoy C., Ed.; Snyder, Henry D., Ed.

    The ten chapters in this booklet cover topics not ordinarily discussed in the classroom: Fibonacci sequences, projective geometry, groups, infinity and transfinite numbers, Pascal's Triangle, topology, experiments with natural numbers, non-Euclidean geometries, Boolean algebras, and the imaginary and the infinite in geometry. Each chapter is…

  15. Federal Council on Science, Engineering and Technology: Committee on Computer Research and Applications, Subcommittee on Science and Engineering Computing: The US Supercomputer Industry

    Energy Technology Data Exchange (ETDEWEB)

    1987-12-01

    The Federal Coordinating Council on Science, Engineering, and Technology (FCCSET) Committee on Supercomputing was chartered by the Director of the Office of Science and Technology Policy in 1982 to examine the status of supercomputing in the United States and to recommend a role for the Federal Government in the development of this technology. In this study, the FCCSET Committee (now called the Subcommittee on Science and Engineering Computing of the FCCSET Committee on Computer Research and Applications) reports on the status of the supercomputer industry and addresses changes that have occurred since issuance of the 1983 and 1985 reports. The review is based upon periodic meetings with and site visits to supercomputer manufacturers, and consultation with experts in high-performance scientific computing. White papers have been contributed to this report by industry leaders and supercomputer experts.

  16. A Framework for HI Spectral Source Finding Using Distributed-Memory Supercomputing

    CERN Document Server

    Westerlund, Stefan

    2014-01-01

    The latest generation of radio astronomy interferometers will conduct all-sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high-performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers, substantial modifications must be made to serial source finding code. To ease the transition, this work presents the Scalable Source Finder Framework, a framework providing storage access, networking communication and data composition functionality, which can support a wide range of source finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder was imp...

  17. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    Science.gov (United States)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Tera-Op program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  18. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10,000 CPU cores; making FDTD code work with the highest efficiency, however, is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (Message Passing Interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high-performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
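
    The MPI virtual-topology mechanism this optimization builds on is exposed directly by the standard API. A minimal mpi4py sketch of a 2D Cartesian layout follows; the decomposition shape is illustrative and this is not the paper's code.

    ```python
    # Create a 2D Cartesian communicator so neighboring FDTD subdomains map to
    # neighboring ranks; run under mpiexec. Halo exchange would then use the
    # neighbor ranks returned by Shift (edges yield MPI.PROC_NULL automatically).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    dims = MPI.Compute_dims(comm.Get_size(), [0, 0])   # balanced 2D grid
    cart = comm.Create_cart(dims, periods=[False, False], reorder=True)
    west, east = cart.Shift(direction=0, disp=1)
    south, north = cart.Shift(direction=1, disp=1)
    print(f"rank {cart.Get_rank()} coords {cart.Get_coords(cart.Get_rank())}")
    ```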

  19. Large-scale integrated super-computing platform for next generation virtual drug discovery.

    Science.gov (United States)

    Mitchell, Wayne; Matsumoto, Shunji

    2011-08-01

    Traditional drug discovery starts by experimentally screening chemical libraries to find hit compounds that bind to protein targets, modulating their activity. Subsequent rounds of iterative chemical derivatization and rescreening are conducted to enhance the potency, selectivity, and pharmacological properties of hit compounds. Although computational docking of ligands to targets has been used to augment the empirical discovery process, its historical effectiveness has been limited because of the poor correlation between ligand dock scores and experimentally determined binding constants. Recent progress in supercomputing, coupled with theoretical insights, allows the calculation of the Gibbs free energy, and therefore accurate binding constants, for large ligand-receptor systems. This advance extends the potential of virtual drug discovery. A specific embodiment of the technology, integrating de novo, abstract fragment-based drug design, sophisticated molecular simulation, and the ability to calculate thermodynamic binding constants with unprecedented accuracy, is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
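
    The link asserted here between an accurately computed Gibbs free energy and the binding constant is the standard thermodynamic relation (up to the usual standard-state convention):

    ```latex
    \Delta G^{\circ}_{\mathrm{bind}} = RT \ln K_d
    \qquad\Longleftrightarrow\qquad
    K_d = \exp\!\left(\frac{\Delta G^{\circ}_{\mathrm{bind}}}{RT}\right)
    ```

    so an error of one kcal/mol in the computed free energy shifts the predicted dissociation constant by roughly a factor of five at room temperature.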

  20. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University]; Rogers, James H [ORNL]; Maxwell, Don E [ORNL]

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second-fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  1. Operational numerical weather prediction on a GPU-accelerated cluster supercomputer

    Science.gov (United States)

    Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas

    2016-04-01

    The local-area weather prediction model COSMO is used at MeteoSwiss to provide high-resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology, the model was optimized and adapted to run on Graphics Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble) and KENDA (data assimilation). These new applications correspond to a roughly 40x increase in computational load compared to the previous operational setup. We present an overview of the approach used to port the COSMO model to GPUs, together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.

  2. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    Science.gov (United States)

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and the MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  3. Mixed precision numerical weather prediction on hybrid GPU-CPU supercomputers

    Science.gov (United States)

    Lapillonne, Xavier; Osuna, Carlos; Spoerri, Pascal; Osterried, Katherine; Charpilloz, Christophe; Fuhrer, Oliver

    2017-04-01

    A new version of the climate and weather model COSMO has been developed that runs faster on traditional high-performance computing systems with CPUs as well as on heterogeneous architectures using graphics processing units (GPUs). In addition, the model was adapted to run in "single precision" mode. After discussing the key changes introduced in this new model version and the tools used in the porting approach, we present three applications, namely the MeteoSwiss operational weather prediction system, COSMO-LEPS and the CALMO project, which already take advantage of the performance improvement, up to a factor of 4, by running on GPU systems and using the single-precision mode. We discuss how the code changes open new perspectives for scientific research and can give researchers access to a new class of supercomputers.

  4. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  5. Modern Gyrokinetic Particle-In-Cell Simulation of Fusion Plasmas on Top Supercomputers

    CERN Document Server

    Wang, Bei; Tang, William; Ibrahim, Khaled; Madduri, Kamesh; Williams, Samuel; Oliker, Leonid

    2015-01-01

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon...

  6. Dawning Nebulae: A PetaFLOPS Supercomputer with a Heterogeneous Structure

    Institute of Scientific and Technical Information of China (English)

    Ning-Hui Sun; Jing Xing; Zhi-Gang Huo; Guang-Ming Tan; Jin Xiong; Bo Li; Can Ma

    2011-01-01

    Dawning Nebulae is a heterogeneous system composed of 9280 multi-core x86 CPUs and 4640 NVIDIA Fermi GPUs. With a Linpack performance of 1.271 petaFLOPS, it was ranked second in the TOP500 list released in June 2010. In this paper, key issues in the system design of Dawning Nebulae are introduced. System tuning methodologies aiming at the petaFLOPS Linpack result are presented, including algorithmic optimization and communication improvement. The design of its file I/O subsystem, including HVFS and the underlying DCFS3, is also described. Performance evaluations show that the Linpack efficiency of each node reaches 69.89%, and that 1024-node aggregate read and write bandwidths exceed 100 GB/s and 70 GB/s, respectively. The success of Dawning Nebulae has demonstrated the viability of the CPU/GPU heterogeneous structure for future supercomputer designs.

  7. Novel Topic Authorship Attribution

    Science.gov (United States)

    2011-03-01

    Keywords: cross-validation, genre shift, vector projection, singular value decomposition, principal component analysis. Abstract fragment: "...by eight students. Each student wrote a total of 24 documents in three different genres about three different topics. They found that compensating for..." Cited: Baayen, H. Halteren, A. Neijt, and F. Tweedie, "Outside the cave of shadows: Using syntactic annotation to enhance authorship attribution," Literary...

  8. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is now a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain may vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite-difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data-transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver explains previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
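
    The iterative finite-difference scheme described above reduces, in its simplest serial form, to a few lines. The toy Jacobi relaxation of a 2D Laplace problem below illustrates the idea; the production solvers add the physics, the residual preconditioning, and GPU/MPI parallelism.

    ```python
    # Toy Jacobi iteration on a 2D Laplace problem: update each interior point
    # from the average of its four neighbors until the residual is small.
    import numpy as np

    n = 128
    u = np.zeros((n, n))
    u[0, :] = 1.0                                     # fixed boundary value

    for it in range(5000):
        interior = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
        residual = np.max(np.abs(interior - u[1:-1, 1:-1]))
        u[1:-1, 1:-1] = interior
        if residual < 1e-6:
            break
    print(f"stopped after {it + 1} sweeps, residual {residual:.2e}")
    ```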

  9. Topics in field theory

    CERN Document Server

    Karpilovsky, G

    1989-01-01

    This monograph gives a systematic account of certain important topics pertaining to field theory, including the central ideas, basic results and fundamental methods.Avoiding excessive technical detail, the book is intended for the student who has completed the equivalent of a standard first-year graduate algebra course. Thus it is assumed that the reader is familiar with basic ring-theoretic and group-theoretic concepts. A chapter on algebraic preliminaries is included, as well as a fairly large bibliography of works which are either directly relevant to the text or offer supplementary material of interest.

  10. Topics in Operator Theory

    CERN Document Server

    Ball, Joseph A; Helton, J William; Rodman, Leiba; Spitkovsky, Ilya

    2010-01-01

    This is the first volume of a collection of original and review articles on recent advances and new directions in a multifaceted and interconnected area of mathematics and its applications. It encompasses many topics in theoretical developments in operator theory and its diverse applications in applied mathematics, physics, engineering, and other disciplines. The purpose is to bring in one volume many important original results of cutting edge research as well as authoritative review of recent achievements, challenges, and future directions in the area of operator theory and its applications.

  11. Hot topics for leadership development.

    Science.gov (United States)

    Bleich, Michael R

    2015-02-01

    Three areas stand out from a health systems perspective that should be on the development agenda for all leaders. These topics include population health, predictive analytics, and supply chain management. Together, these topics address access, quality, and cost management.

  12. Health Topics: MedlinePlus

    Science.gov (United States)

    MedlinePlus Health Topics: https://medlineplus.gov/healthtopics.html. Topic pages are regularly reviewed and links are updated daily; topics can be browsed A-Z.

  13. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, and geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementing a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks, depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to give interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition, responsible for
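
    The computational kernel behind such imaging is pairwise cross-correlation of noise records. A self-contained toy version with synthetic traces follows; the station data, sampling rate, and noise model are invented stand-ins for the real pipeline.

    ```python
    # Cross-correlate two noise traces and read off the inter-station delay
    # from the lag of the correlation peak.
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    fs = 20.0                                   # samples per second (assumed)
    rng = np.random.default_rng(3)
    trace_a = rng.normal(size=72_000)           # one hour of noise, station A
    trace_b = np.roll(trace_a, 40) + 0.5 * rng.normal(size=72_000)

    xcorr = correlate(trace_b, trace_a, mode="full")
    lags = correlation_lags(len(trace_b), len(trace_a), mode="full")
    print(f"estimated delay: {lags[np.argmax(xcorr)] / fs:.2f} s")
    ```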

  14. Special Operations Research Topics 2016

    Science.gov (United States)

    2015-01-01

    Fragments from the 2016 topic list: "...maintain SOF buying power and establish ...higher value?"; "As the millennial generation and subsequent generations increasingly rely on social media to connect, how will this impact the safety... buy..." Completed research papers on these topics are to be sent to the JSOU Center for Special Operations...

  15. Topics in orbit equivalence

    CERN Document Server

    Kechris, Alexander S

    2004-01-01

    This volume provides a self-contained introduction to some topics in orbit equivalence theory, a branch of ergodic theory. The first two chapters focus on hyperfiniteness and amenability. Included here are proofs of Dye's theorem that probability measure-preserving, ergodic actions of the integers are orbit equivalent and of the theorem of Connes-Feldman-Weiss identifying amenability and hyperfiniteness for non-singular equivalence relations. The presentation here is often influenced by descriptive set theory, and Borel and generic analogs of various results are discussed. The final chapter is a detailed account of Gaboriau's recent results on the theory of costs for equivalence relations and groups and its applications to proving rigidity theorems for actions of free groups.

  16. Topics in atomic physics

    CERN Document Server

    Burkhardt, Charles E

    2006-01-01

    The study of atomic physics propelled us into the quantum age in the early twentieth century and carried us into the twenty-first century with a wealth of new and, in some cases, unexplained phenomena. Topics in Atomic Physics provides a foundation for students to begin research in modern atomic physics. It can also serve as a reference because it contains material that is not easily located in other sources. A distinguishing feature is the thorough exposition of the quantum mechanical hydrogen atom using both the traditional formulation and an alternative treatment not usually found in textbooks. The alternative treatment exploits the preeminent nature of the pure Coulomb potential and places the Lenz vector operator on an equal footing with other operators corresponding to classically conserved quantities. A number of difficult to find proofs and derivations are included as is development of operator formalism that permits facile solution of the Stark effect in hydrogen. Discussion of the classical hydrogen...

  17. The company's mainframes join CERN's openlab for DataGrid apps and are pivotal in a new $22 million Supercomputer in the U.K.

    CERN Multimedia

    2002-01-01

    Hewlett-Packard has installed a supercomputer system valued at more than $22 million at the Wellcome Trust Sanger Institute (WTSI) in the U.K. HP has also joined the CERN openlab for DataGrid applications (1 page).

  18. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  19. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  20. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    Science.gov (United States)

    de Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu, Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-02-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphics processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time step, so that the influence of their spikes is processed simultaneously on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
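
    In serial form, the global spiking list amounts to collecting the indices of neurons that fired in the current step and applying all of their outgoing weights in one batch. The toy integrate-and-fire sketch below illustrates the bookkeeping; it is not the ODLM itself, and the sizes and dynamics are invented.

    ```python
    # Each step: gather fired neurons into a "spiking list", reset them, and
    # deliver their spikes to all targets at once via a column gather.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1000
    v = rng.uniform(0.0, 1.0, n)                 # membrane potentials
    w = rng.uniform(0.0, 0.02, (n, n))           # dense toy synaptic weights

    for step in range(100):
        spiking_list = np.flatnonzero(v >= 1.0)  # the global spiking list
        v[spiking_list] = 0.0                    # reset fired neurons
        v += w[:, spiking_list].sum(axis=1)      # apply all spikes in one batch
        v += 0.05                                # constant external drive
    ```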

  1. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper-and-pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  2. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory]; Germann, Timothy C [Los Alamos National Laboratory]; Kadau, Kai [Los Alamos National Laboratory]; Fossum, Gordon C [IBM Corporation]

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair-potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per Watt and approximately 3.69 MFlop/s per dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
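
    As a consistency check on the quoted figures, sustaining 369 Tflop/s at 27.7% of peak implies a full-system double-precision peak of roughly

    ```latex
    R_{\text{peak}} \approx \frac{369~\text{Tflop/s}}{0.277} \approx 1.33~\text{Pflop/s}.
    ```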

  3. Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing

    Science.gov (United States)

    Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David

    2011-10-01

    We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring the locality of the algorithm architecture, in order to fully exploit the massive number of parallel threads on the GPU, are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian-grid hash table scheme that allows fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
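
    The Cartesian-grid hash idea mentioned above can be illustrated with a plain dictionary keyed on integer cell coordinates; this is a sketch of the concept rather than the paper's OpenCL implementation.

    ```python
    # Hash cells by (i, j, level) so that neighbor queries are O(1) probes.
    cells = {}                                    # (i, j, level) -> cell payload

    def insert_cell(i, j, level, data):
        cells[(i, j, level)] = data

    def neighbors(i, j, level):
        # Same-level face neighbors; a real AMR code also probes level +/- 1.
        candidates = [(i - 1, j, level), (i + 1, j, level),
                      (i, j - 1, level), (i, j + 1, level)]
        return {key: cells[key] for key in candidates if key in cells}

    insert_cell(4, 7, 2, {"h": 1.0})
    insert_cell(5, 7, 2, {"h": 0.9})
    print(neighbors(4, 7, 2))
    ```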

  4. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    Science.gov (United States)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to those of conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free-energy profiles in all-atom models of biomolecules.

  5. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution-time constraints. The web portal provides a user-friendly solution.

  6. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO₂ sequestration in deep geologic formations.

  7. A user-friendly web portal for T-Coffee on supercomputers.

    Science.gov (United States)

    Rius, Josep; Cores, Fernando; Solsona, Francesc; van Hemert, Jano I; Koetsier, Jos; Notredame, Cedric

    2011-05-12

    Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution-time constraints. The web portal provides a user-friendly solution.

  8. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  9. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    Science.gov (United States)

    Kennedy, J. A.; Kluth, S.; Mazzaferro, L.; Walker, Rodney

    2015-12-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2015. This high-luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the Hydra supercomputer facility. Hydra is the supercomputer of the Max Planck Society; it is a Linux-based supercomputer with over 80,000 cores and 4,000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system by using the NorduGrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed, as well as possibilities for future directions.

  10. Topics in statistical mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Elser, V.

    1984-05-01

    This thesis deals with four independent topics in statistical mechanics: (1) the dimer problem is solved exactly for a hexagonal lattice with general boundary using a known generating function from the theory of partitions. It is shown that the leading term in the entropy depends on the shape of the boundary; (2) continuum models of percolation and self-avoiding walks are introduced with the property that their series expansions are sums over linear graphs with intrinsic combinatorial weights and explicit dimension dependence; (3) a constrained SOS model is used to describe the edge of a simple cubic crystal. Low and high temperature results are derived as well as the detailed behavior near the crystal facet; (4) the microscopic model of the lambda-transition involving atomic permutation cycles is reexamined. In particular, a new derivation of the two-component field theory model of the critical behavior is presented. Results for a lattice model originally proposed by Kikuchi are extended with a high temperature series expansion and Monte Carlo simulation. 30 references.

  11. Advanced verification topics

    CERN Document Server

    Bhattacharya, Bishnupriya; Hall, Gary; Heaton, Nick; Kashai, Yaron; Khan, Neyaz; Kirshenbaum, Zeev; Shneydor, Efrat

    2011-01-01

    The Accellera Universal Verification Methodology (UVM) standard is architected to scale, but verification is growing in more than just the digital design dimension. It is growing in the SoC dimension to include low-power and mixed-signal, and in the system integration dimension to include multi-language support and acceleration. These items and others all contribute to the quality of the SoC, so the Metric-Driven Verification (MDV) methodology is needed to unify it all into a coherent verification plan. This book is for verification engineers and managers familiar with the UVM and the benefits it brings to digital verification, but who also need to tackle specialized tasks. It is also written for the SoC project manager who is tasked with building an efficient worldwide team. While the task continues to become more complex, Advanced Verification Topics describes methodologies outside of the Accellera UVM standard, but that build on it, to provide a way for SoC teams to stay productive and profitable.

  12. Superconcentration and related topics

    CERN Document Server

    Chatterjee, Sourav

    2014-01-01

    A certain curious feature of random objects, introduced by the author as “super concentration,” and two related topics, “chaos” and “multiple valleys,” are highlighted in this book. Although super concentration has established itself as a recognized feature in a number of areas of probability theory in the last twenty years (under a variety of names), the author was the first to discover and explore its connections with chaos and multiple valleys. He achieves a substantial degree of simplification and clarity in the presentation of these findings by using the spectral approach. Understanding the fluctuations of random objects is one of the major goals of probability theory and a whole subfield of probability and analysis, called concentration of measure, is devoted to understanding these fluctuations. This subfield offers a range of tools for computing upper bounds on the orders of fluctuations of very complicated random variables. Usually, concentration of measure is useful when more direct prob...

  13. Discovering health topics in social media using topic models.

    Directory of Open Access Journals (Sweden)

    Michael J Paul

    Full Text Available By aggregating self-reported health statuses across millions of users, we seek to characterize the variety of health information discussed in Twitter. We describe a topic modeling framework for discovering health topics in Twitter, a social media website. This is an exploratory approach with the goal of understanding what health topics are commonly discussed in social media. This paper describes in detail a statistical topic model created for this purpose, the Ailment Topic Aspect Model (ATAM), as well as our system for filtering general Twitter data based on health keywords and supervised classification. We show how ATAM and other topic models can automatically infer health topics in 144 million Twitter messages from 2011 to 2013. ATAM discovered 13 coherent clusters of Twitter messages, some of which correlate with seasonal influenza (r = 0.689) and allergies (r = 0.810) temporal surveillance data, as well as exercise (r = 0.534) and obesity (r = -0.631) related geographic survey data in the United States. These results demonstrate that it is possible to automatically discover topics that attain statistically significant correlations with ground truth data, despite using minimal human supervision and no historical data to train the model, in contrast to prior work. Additionally, these results demonstrate that a single general-purpose model can identify many different health topics in social media.

  14. Discovering health topics in social media using topic models.

    Science.gov (United States)

    Paul, Michael J; Dredze, Mark

    2014-01-01

    By aggregating self-reported health statuses across millions of users, we seek to characterize the variety of health information discussed in Twitter. We describe a topic modeling framework for discovering health topics in Twitter, a social media website. This is an exploratory approach with the goal of understanding what health topics are commonly discussed in social media. This paper describes in detail a statistical topic model created for this purpose, the Ailment Topic Aspect Model (ATAM), as well as our system for filtering general Twitter data based on health keywords and supervised classification. We show how ATAM and other topic models can automatically infer health topics in 144 million Twitter messages from 2011 to 2013. ATAM discovered 13 coherent clusters of Twitter messages, some of which correlate with seasonal influenza (r = 0.689) and allergies (r = 0.810) temporal surveillance data, as well as exercise (r = 0.534) and obesity (r = -0.631) related geographic survey data in the United States. These results demonstrate that it is possible to automatically discover topics that attain statistically significant correlations with ground truth data, despite using minimal human supervision and no historical data to train the model, in contrast to prior work. Additionally, these results demonstrate that a single general-purpose model can identify many different health topics in social media.
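
    To make the reported comparisons concrete, here is a hypothetical sketch of correlating a topic's weekly prevalence with an external surveillance series; the toy arrays are invented stand-ins, not the paper's data:

        # Hypothetical sketch: correlate a topic's weekly prevalence with an
        # external surveillance series, as in the ATAM evaluation above.
        import numpy as np
        from scipy.stats import pearsonr

        # weekly fraction of tweets assigned to a "flu" topic (toy values)
        topic_prevalence = np.array([0.01, 0.03, 0.08, 0.12, 0.09, 0.04])
        # matching weekly influenza-like-illness rates (toy values)
        surveillance = np.array([0.5, 1.1, 2.9, 4.2, 3.1, 1.5])

        r, p_value = pearsonr(topic_prevalence, surveillance)
        print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")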

  15. KEY TOPICS IN SPORTS MEDICINE

    Directory of Open Access Journals (Sweden)

    Amir Ali Narvani

    2006-12-01

    Full Text Available Key Topics in Sports Medicine is a single quick reference source for sports and exercise medicine. It presents the essential information from across relevant topic areas, and includes both the core and emerging issues in this rapidly developing field. It covers: (1) sports injuries, rehabilitation and injury prevention; (2) exercise physiology, fitness testing and training; (3) drugs in sport; (4) exercise and health promotion; (5) sport and exercise for special and clinical populations; (6) the psychology of performance and injury. PURPOSE The Key Topics format provides extensive, concise information in an accessible, easy-to-follow manner. AUDIENCE The book is targeted at students and specialists in sports medicine and rehabilitation, athletic training, physiotherapy and orthopaedic surgery. The editors are authorities in their respective fields and this handbook draws on their extensive experience and knowledge accumulated over the years. FEATURES The book contains information for clinical guidance, with rapid access to concise details and facts. It is composed of 99 topics which present the information in an order that is considered logical and progressive, as in most texts. Chapter headings are: 1. Functional Anatomy, 2. Training Principles / Development of Strength and Power, 3. Biomechanical Principles, 4. Biomechanical Analysis, 5. Physiology of Training, 6. Monitoring of Training Progress, 7. Nutrition, 8. Hot and Cold Climates, 9. Altitude, 10. Sport and Travelling, 11. Principles of Sport Injury Diagnosis, 12. Principles of Sport and Soft Tissue Management, 13. Principles of Physical Therapy and Rehabilitation, 14. Principles of Sport Injury Prevention, 15. Sports Psychology, 16. Team Sports, 17. Psychological Aspects of Injury in Sport, 18. Injury Repair Process, 19. Basic Biomechanics of Tissue Injury, 20. Plain Film Radiography in Sport, 21. Nuclear Medicine, 22. Diagnostic Ultrasound, 23. MRI Scan, 24. Other Imaging, 25. Head Injury, 26. Eye

  16. Influence of Earth crust composition on continental collision style in Precambrian conditions: Results of supercomputer modelling

    Science.gov (United States)

    Zavyalov, Sergey; Zakharov, Vladimir

    2016-04-01

    A number of issues concerning Precambrian geodynamics remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably from present-day values. In this work, we show results of numerical supercomputations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust; 3) basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with completely felsic crust is unlikely. In the case of basic lower crust, continental subduction and subsequent continental rock exhumation can take place; therefore, the formation of ultra-high-pressure metamorphic rocks is possible. Continental subduction also occurs in the case of inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer, and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to

  17. Topics in dynamics

    CERN Document Server

    Nelson, Edward

    2015-01-01

    Kinematical problems of both classical and quantum mechanics are considered in these lecture notes, which range from differential calculus to the application of one of Chernoff's theorems. Originally published in 1970. The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback editions. The goal of the Princeton Legacy Library is to vastly increase

  18. Topics of Evolutionary Computation 2001

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    This booklet contains the student reports from the course Topics of Evolutionary Computation, Fall 2001, given by Thiemo Krink, Rene Thomsen and Rasmus K. Ursem.

  19. The topical treatment of psoriasis.

    NARCIS (Netherlands)

    Kerkhof, P.C.M. van de; Vissers, W.H.P.M.

    2003-01-01

    According to patients, improvements in efficacy, long-term safety and compliance are needed. Topical treatment has been innovated during the last decade. Most important is the introduction of two new classes of treatment: topical vitamin D3 analogues and the retinoid tazarotene

  20. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    Science.gov (United States)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithography-dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics, along with substrate, exposure, and all post-exposure processing, must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributive computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced; model comparisons show why vector imaging and rigorous resist models work better than less rigorous models; then some applications that use our distributive computing solution are shown

  1. Topic modelling in the information warfare domain

    CSIR Research Space (South Africa)

    De Waal, A

    2013-11-01

    Full Text Available In this paper the authors provide context to topic modelling as an Information Warfare technique. Topic modelling is a technique that discovers latent topics in an unstructured and unlabelled collection of documents. The topic structure can be searched...
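
    As a minimal illustration of discovering latent topics in an unlabelled collection, here is a sketch using gensim's LDA; the toy corpus and parameter values are assumptions of this illustration:

        # Minimal latent-topic discovery sketch with gensim's LDA; the tiny
        # corpus stands in for an unlabelled document collection.
        from gensim import corpora, models

        texts = [
            ["cyber", "attack", "network", "defence"],
            ["propaganda", "message", "media", "influence"],
            ["network", "influence", "media", "attack"],
        ]
        dictionary = corpora.Dictionary(texts)
        corpus = [dictionary.doc2bow(t) for t in texts]

        lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
        for topic_id, terms in lda.print_topics():
            print(topic_id, terms)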

  2. Recent advances in topical anesthesia

    Science.gov (United States)

    2016-01-01

    Topical anesthetics act on the peripheral nerves and reduce the sensation of pain at the site of application. In dentistry, they are used to control local pain caused by needling, placement of orthodontic bands, the vomiting reflex, oral mucositis, and rubber-dam clamp placement. Traditional topical anesthetics contain lidocaine or benzocaine as active ingredients and are used in the form of solutions, creams, gels, and sprays. Eutectic mixture of local anesthetics cream, a mixture of various topical anesthetics, has been reported to be more potent than other anesthetics. Recently, new products with modified ingredients and application methods have been introduced into the market. These products may be used for mild pain during periodontal treatment, such as scaling. Dentists should be aware that topical anesthetics, although rarely, might induce allergic reactions or side effects as a result of overdose. Topical anesthetics are useful aids during dental treatment, as they reduce dental phobia, especially in children, by mitigating discomfort and pain. PMID:28879311

  3. An Emerging Era in Topical Delivery: Organogels

    Directory of Open Access Journals (Sweden)

    Sreedevi.T

    2012-06-01

    Full Text Available Semisolid preparations for external application to the skin are in great demand, since they are easily absorbed through the skin layers. Many novel topical dosage forms have been developed, among which organogels appear to play an important role. Organogels are thermodynamically stable, biocompatible, isotropic gels, which give not only a localised effect but also a systemic effect through percutaneous absorption. Although different types of gelator molecules are being used for the development of organogels, attention has mainly focused on egg and soya lecithin. The purity of the lecithin is also considered to be an important factor in gelation. Apart from lecithin, non-ionic surfactant-based microemulsion gels and pluronic organogels are also being developed. Compared to conventional topical dosage forms, these novel formulations are found to be more advantageous and efficient. In future, organogels may lead to many promising discoveries in the field of topical dosage forms. The current review aims at giving an idea about organogels, their applications, and their importance in topical delivery.

  4. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, V.S. (Emory Univ., Atlanta, GA (USA). Dept. of Mathematics and Computer Science); Geist, G.A. (Oak Ridge National Lab., TN (USA))

    1991-01-01

    The PVM (Parallel Virtual Machine) system enables supercomputer level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high performance applications, and shows that supercomputer level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities, as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM will be discussed; including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production quality concurrent applications have been successfully executed using PVM on a variety of networked platforms. The paper will mention representative examples, and discuss two in detail. The first is a material sciences problem that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations will be presented, and accompanied by discussions of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.
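
    PVM's native API is C and Fortran; purely as a modern analogue of the master/worker message-passing style described above, here is a minimal Python sketch using mpi4py (an assumption of this illustration, not part of PVM):

        # Scatter work from a master rank across networked processes and
        # gather partial results, in the spirit of PVM's cooperative model.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:   # master: one chunk of work per process
            chunks = [list(range(i, i + 4)) for i in range(0, 4 * comm.Get_size(), 4)]
        else:
            chunks = None
        work = comm.scatter(chunks, root=0)
        partial = sum(x * x for x in work)      # stand-in for a real kernel
        totals = comm.gather(partial, root=0)
        if rank == 0:
            print("sum of squares:", sum(totals))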

  5. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    Science.gov (United States)

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multidimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
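
    To make the exchange step concrete, here is a minimal sketch assuming harmonic umbrella biases; the names, the one-dimensional restraint, and the parameter values are illustrative, not the production implementation. In the 2D scheme, such swaps are attempted alternately along the two order-parameter axes:

        import math, random

        def bias(center, k, xi):
            # harmonic umbrella restraint centred on one window (assumed form)
            return 0.5 * k * (xi - center) ** 2

        def attempt_exchange(xi_i, xi_j, c_i, c_j, k, beta):
            # Metropolis criterion for swapping configurations of two windows
            delta = (bias(c_i, k, xi_j) + bias(c_j, k, xi_i)
                     - bias(c_i, k, xi_i) - bias(c_j, k, xi_j))
            return delta <= 0.0 or random.random() < math.exp(-beta * delta)

        # e.g., neighbouring windows along the ion-pocket distance axis
        print(attempt_exchange(3.1, 3.4, 3.0, 3.5, k=10.0, beta=1.0))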

  6. Semantic Segmentation with Same Topic Constraints

    Directory of Open Access Journals (Sweden)

    Ling Mao

    2013-02-01

    Full Text Available A popular approach to semantic segmentation problems is to construct a pairwise Conditional Markov Random Field (CRF) over image pixels, where the pairwise term encodes a preference for smoothness within pixel neighborhoods. Recently, researchers have considered higher-order models that encode local region or soft non-local constraints (e.g., label consistency or co-occurrence statistics). These new models with higher-order terms have significantly pushed the state of the art for semantic segmentation problems. In this study, we consider a novel non-local constraint that enforces consistent pixel labels among those image regions having the same topic. These topics are discovered by the Probabilistic Latent Semantic Analysis (PLSA) model. We encode this constraint as a robust Pn higher-order potential among all the image regions of the same topic in a unified CRF model. We experimentally demonstrate quantitative and qualitative improvements over refined baseline unary and pairwise CRF models.
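
    Purely as an illustrative sketch of a robust, truncated higher-order consistency cost of the kind described (the names, parameters, and exact functional form are assumptions; the paper defines its potential inside a full CRF energy):

        # Charge for label disagreement within all regions sharing one PLSA
        # topic, truncating the cost so that a few outliers stay cheap.
        from collections import Counter

        def topic_consistency_penalty(labels, gamma_max=5.0, slope=0.5):
            """labels: segment labels of the regions in one topic group."""
            counts = Counter(labels)
            majority = counts.most_common(1)[0][1]
            disagreeing = len(labels) - majority
            return min(gamma_max, slope * disagreeing)   # robust truncated cost

        print(topic_consistency_penalty(["sky", "sky", "sky", "tree"]))   # 0.5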

  7. Mental Mechanisms for Topics Identification

    Directory of Open Access Journals (Sweden)

    Louis Massey

    2014-01-01

    Full Text Available Topics identification (TI) is the process that consists in determining the main themes present in natural language documents. The current TI modeling paradigm aims at acquiring semantic information from statistic properties of large text datasets. We investigate the mental mechanisms responsible for the identification of topics in a single document given existing knowledge. Our main hypothesis is that topics are the result of accumulated neural activation of loosely organized information stored in long-term memory (LTM). We experimentally tested our hypothesis with a computational model that simulates LTM activation. The model assumes activation decay as an unavoidable phenomenon originating from the bioelectric nature of neural systems. Since decay should negatively affect the quality of topics, the model predicts the presence of short-term memory (STM) to keep the focus of attention on a few words, with the expected outcome of restoring quality to a baseline level. Our experiments measured topics quality of over 300 documents with various decay rates and STM capacity. Our results showed that accumulated activation of loosely organized information was an effective mental computational commodity to identify topics. It was furthermore confirmed that rapid decay is detrimental to topics quality but that limited capacity STM restores quality to a baseline level, even exceeding it slightly.

  8. Mental mechanisms for topics identification.

    Science.gov (United States)

    Massey, Louis

    2014-01-01

    Topics identification (TI) is the process that consists in determining the main themes present in natural language documents. The current TI modeling paradigm aims at acquiring semantic information from statistic properties of large text datasets. We investigate the mental mechanisms responsible for the identification of topics in a single document given existing knowledge. Our main hypothesis is that topics are the result of accumulated neural activation of loosely organized information stored in long-term memory (LTM). We experimentally tested our hypothesis with a computational model that simulates LTM activation. The model assumes activation decay as an unavoidable phenomenon originating from the bioelectric nature of neural systems. Since decay should negatively affect the quality of topics, the model predicts the presence of short-term memory (STM) to keep the focus of attention on a few words, with the expected outcome of restoring quality to a baseline level. Our experiments measured topics quality of over 300 documents with various decay rates and STM capacity. Our results showed that accumulated activation of loosely organized information was an effective mental computational commodity to identify topics. It was furthermore confirmed that rapid decay is detrimental to topics quality but that limited capacity STM restores quality to a baseline level, even exceeding it slightly.
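
    A toy sketch of the hypothesis as stated in the abstract: activation accumulates over loosely associated long-term-memory entries, decays each step, and a small STM buffer refreshes the current focus words. All names and parameter values are illustrative assumptions:

        from collections import defaultdict

        def identify_topics(words, associations, decay=0.8, stm_size=5,
                            stm_boost=0.3, top_k=1):
            activation = defaultdict(float)
            stm = []                                  # limited-capacity focus buffer
            for w in words:
                for neighbour, weight in associations.get(w, []):
                    activation[neighbour] += weight   # accumulate LTM activation
                stm = (stm + [w])[-stm_size:]         # keep only recent focus words
                for concept in activation:            # bioelectric-style decay...
                    activation[concept] *= decay
                for f in stm:                         # ...offset for refreshed STM items
                    activation[f] += stm_boost
            return sorted(activation, key=activation.get, reverse=True)[:top_k]

        assoc = {"bank": [("finance", 1.0), ("river", 0.4)],
                 "loan": [("finance", 1.0)]}
        print(identify_topics(["bank", "loan"], assoc))   # ['finance']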

  9. APT accelerator. Topical report

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, G.; Rusthoi, D. [comp.] [ed.]

    1995-03-01

    The Accelerator Production of Tritium (APT) project, sponsored by Department of Energy Defense Programs (DOE/DP), involves the preconceptual design of an accelerator system to produce tritium for the nation's stockpile of nuclear weapons. Tritium is an isotope of hydrogen used in nuclear weapons, and must be replenished because of radioactive decay (its half-life is approximately 12 years). Because the annual production requirement for tritium has greatly decreased since the end of the Cold War, an alternative to reactors for tritium production, based on a linear accelerator, is now being seriously considered. The annual tritium requirement at the time this study was undertaken (1992-1993) was 3/8 that of the 1988 goal, usually stated as 3/8-Goal. Continued reduction in the number of weapons in the stockpile has led to a revised (lower) production requirement today (March 1995). The production requirement needed to maintain the reduced stockpile, as stated in the recent Nuclear Posture Review (summer 1994), is approximately 3/16-Goal, half the previous level. The Nuclear Posture Review also requires that the production plant be designed to accommodate a production increase (surge) to 3/8-Goal capability within five years, to allow recovery from a possible extended outage of the tritium plant. A multi-laboratory team, collaborating with several industrial partners, has developed a preconceptual APT design for the 3/8-Goal, operating at 75% capacity. The team has presented APT as a promising alternative to the reactor concepts proposed for Complex-21. Given the requirements of a reduced weapons stockpile, APT offers significant safety, environmental, and production-flexibility advantages in comparison with reactor systems, and the prospect of successful development in time to meet the US defense requirements of the 21st century.

  10. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design, and implementation in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on the exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime-efficient poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using
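
    Purely to illustrate the building-block idea, here is a hypothetical Python sketch in which a modeller composes pre-programmed operations while the framework decides how each is executed (serially here; a real framework could dispatch to threads or GPUs). The operation names are invented for illustration and are not PCRaster's API:

        import numpy as np

        def slope(elev):                 # building block: local terrain slope
            gy, gx = np.gradient(elev)
            return np.hypot(gx, gy)

        def route_water(rain, s):        # building block: toy runoff redistribution
            return rain * (1.0 + s)

        def model(elev, rain):           # the domain scientist only composes blocks
            return route_water(rain, slope(elev))

        print(model(np.random.rand(4, 4), np.ones((4, 4))).shape)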

  11. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.

    Science.gov (United States)

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

    For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was the one using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similar performance, with very low overhead for the initiation of spike communication: the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible, since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect
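
    As a rough illustration of the baseline collective approach benchmarked above, here is a minimal mpi4py sketch of allgather-style spike exchange; the data layout and names are assumptions, not the authors' implementation:

        # Every rank shares the identifiers of its locally fired cells with
        # all other ranks, analogous to the MPI_Allgather method.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        local_spikes = [(rank, t) for t in range(2)]   # (source cell, time) stand-ins
        all_spikes = comm.allgather(local_spikes)      # one list per rank, everywhere
        incoming = [s for per_rank in all_spikes for s in per_rank]
        if rank == 0:
            print(len(incoming), "spikes exchanged this interval")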

  12. [Update of the topical treatment of psoriasis].

    Science.gov (United States)

    Carrascosa, J M; Vanaclocha, F; Borrego, L; Fernández-López, E; Fuertes, A; Rodríguez-Fernández-Freire, L; Zulaica, A; Tuneu, A; Caballé, G; Colomé, E; Bordas, X; Hernanz, J M; Brufau, C; Herrera, E

    2009-04-01

    Topical therapy continues to be one of the pillars of psoriasis management. Topical corticosteroids and vitamin D analogs are the drugs of choice during the induction phase, and vitamin D analogs continue to be drugs of choice for maintenance therapy. Tazarotene and dithranol are suitable options in patients with certain, specific characteristics. The calcineurin inhibitors can be considered to be second-line treatment for psoriasis of the face and flexures. The efficacy and safety of the fixed-dose combination of betamethasone and calcipotriol in the induction phase is greater than that of either drug alone. The combination of corticosteroids with salicylic acid achieves better results than corticosteroids in monotherapy. None of the drugs evaluated stands out over the others in all clinical situations, and their use must therefore be individualized in each patient and adjusted according to the course of the disease.

  13. Topics in quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Lamon, Raphael

    2010-06-29

    …Furthermore, we succeed in solving the quantum Gauss constraint. In the second part of the thesis we introduce some aspects of phenomenological quantum gravity and their possible detectable signatures. The goal of phenomenological quantum gravity is to derive conclusions and make predictions from expected characteristics of a full theory of quantum gravity. One possibility is an energy-dependent speed of light arising from a quantized space, such that the propagation times of two photons differ. However, these corrections are very small, so only cosmological distances can be considered. Gamma-ray bursts (GRBs) are ideal candidates, as they are short but very luminous bursts of gamma rays taking place billions of light-years away. We study GRBs detected by the European satellite INTEGRAL and develop a new method to analyze unbinned data. A χ²-test provides a lower bound for quantum gravity corrections, which is nevertheless well below the Planck mass. We then study the sensitivity of NASA's new satellite, the Fermi Gamma-ray Space Telescope, and conclude that it is well suited to detect such corrections. This prediction was confirmed when Fermi detected a very energetic photon emanating from GRB 090510, which strongly constrains models with linear corrections to the speed of light. However, as shown at the end of this thesis, more bursts are needed in order to definitively falsify such models. (orig.)
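
    For reference, the linear-correction arrival-time lag that such analyses test has the schematic form below (cosmological-expansion factors omitted; ξ is a model-dependent order-one coefficient, stated here as a standard approximation rather than the thesis's exact expression):

        % Lag of a photon of energy E2 relative to one of energy E1 from a
        % source at light-travel distance D, for a linear-in-energy
        % modification of the photon dispersion relation:
        \[ \Delta t \;\approx\; \xi\, \frac{E_2 - E_1}{E_{\mathrm{Pl}}}\, \frac{D}{c},
           \qquad \xi = \mathcal{O}(1). \]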

  14. Efinaconazole topical solution, 10%: formulation development program of a new topical treatment of toenail onychomycosis.

    Science.gov (United States)

    Bhatt, V; Pillai, R

    2015-07-01

    Transungual drug delivery of antifungals is considered highly desirable to treat common nail disorders such as onychomycosis, due to localized effects and improved adherence resulting from minimal systemic adverse events. However, the development of effective topical therapies has been hampered by poor nail penetration. An effective topical antifungal must permeate through, and under, the dense keratinized nail plate to the site of infection in the nail bed and nail matrix. We present here the formulation development program to provide effective transungual and subungual delivery of efinaconazole, the first topical broad-spectrum triazole specifically developed for onychomycosis treatment. We discuss the important aspects of the formulation development program for efinaconazole topical solution, 10%, focusing on its solubility in a number of solvents, in vitro penetration through the nail, and in vivo efficacy. Efinaconazole topical solution, 10% is a stable, non-lacquer antifungal with a unique combination of ingredients added to an alcohol-based formulation to provide low surface tension and good wetting properties. This low surface tension is believed to enable effective transungual delivery of efinaconazole and to provide a dual mode of delivery by wicking into the space between the nail plate and the nail bed. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  15. Erythromycin and Benzoyl Peroxide Topical

    Science.gov (United States)

    The combination of erythromycin and benzoyl peroxide is used to treat acne. Erythromycin and benzoyl peroxide are in a class of medications called topical antibiotics. The combination of erythromycin ...

  16. Clindamycin and Benzoyl Peroxide Topical

    Science.gov (United States)

    The combination of clindamycin and benzoyl peroxide is used to treat acne. Clindamycin and benzoyl peroxide are in a class of medications called topical antibiotics. The combination of clindamycin ...

  17. Safety and Health Topics: Asbestos

    Science.gov (United States)

    What is asbestos? Asbestos is the name given to a group ...

  18. Topics in modern differential geometry

    CERN Document Server

    Verstraelen, Leopold

    2017-01-01

    A variety of introductory articles is provided on a wide range of topics, including variational problems on curves and surfaces with anisotropic curvature. Experts in the fields of Riemannian, Lorentzian and contact geometry present state-of-the-art reviews of their topics. The contributions are written on a graduate level and contain extended bibliographies. The ten chapters are the result of various doctoral courses which were held in 2009 and 2010 at universities in Leuven, Serbia, Romania and Spain.

  19. Linguistic Extensions of Topic Models

    Science.gov (United States)

    2010-09-01

    …class of corpora and would help monolingual users to explore and understand multilingual corpora. In this chapter, we develop multilingual LDAWN, an… As in the case of a monolingual walk over the concept hierarchy, we want to provide guidance to the multilingual model as to which parts… topic models for new datasets, such as unaligned multilingual corpora, and combine topic models with other sources of information about documents' context.

  20. Topics of Bioengineering in Wikipedia

    Directory of Open Access Journals (Sweden)

    Vassia Atanassova

    2009-10-01

    Full Text Available The present report aims to give a snapshot of how topics from the field of bioengineering (bioinformatics, bioprocess systems, biomedical engineering, biotechnology, etc.) are currently covered in the free electronic encyclopedia Wikipedia. It also offers insights and information about what Wikipedia is, how it functions, and how and when to cite Wikipedia articles, if necessary. Several external wikis devoted to topics of bioengineering are also listed and reviewed.

  1. Proposal of a Desk-Side Supercomputer with Reconfigurable Data-Paths Using Rapid Single-Flux-Quantum Circuits

    Science.gov (United States)

    Takagi, Naofumi; Murakami, Kazuaki; Fujimaki, Akira; Yoshikawa, Nobuyuki; Inoue, Koji; Honda, Hiroaki

    We propose a desk-side supercomputer with large-scale reconfigurable data-paths (LSRDPs) using superconducting rapid single-flux-quantum (RSFQ) circuits. It has several computing units, each consisting of a general-purpose microprocessor, an LSRDP and a memory. An LSRDP consists of a large number (e.g., a few thousand) of floating-point units (FPUs) and operand routing networks (ORNs) which connect the FPUs. We reconfigure the LSRDP to fit a computation, i.e., a group of floating-point operations that appears in a ‘for’ loop of numerical programs, by setting the routes in the ORNs before the execution of the loop. We propose to implement the LSRDPs with RSFQ circuits. The processors and the memories can be implemented in semiconductor technology. We expect that a 10 TFLOPS supercomputer, as well as its refrigerating engine, will be housed in a desk-side rack, using a near-future RSFQ process technology, such as a 0.35μm process.

  2. TopicPanorama: A Full Picture of Relevant Topics.

    Science.gov (United States)

    Wang, Xiting; Liu, Shixia; Liu, Junlin; Chen, Jianfei; Zhu, Jun; Guo, Baining

    2016-12-01

    This paper presents a visual analytics approach to analyzing a full picture of relevant topics discussed in multiple sources, such as news, blogs, or micro-blogs. The full picture consists of a number of common topics covered by multiple sources, as well as distinctive topics from each source. Our approach models each textual corpus as a topic graph. These graphs are then matched using a consistent graph matching method. Next, we develop a level-of-detail (LOD) visualization that balances both readability and stability. Accordingly, the resulting visualization enhances the ability of users to understand and analyze the matched graph from multiple perspectives. By incorporating metric learning and feature selection into the graph matching algorithm, we allow users to interactively modify the graph matching result based on their information needs. We have applied our approach to various types of data, including news articles, tweets, and blog data. Quantitative evaluation and real-world case studies demonstrate the promise of our approach, especially in support of examining a topic-graph-based full picture at different levels of detail.

  3. Selected topics of fluid mechanics

    Science.gov (United States)

    Kindsvater, Carl E.

    1958-01-01

    …the Euler, Froude, Reynolds, Weber, and Cauchy numbers are defined as essential tools for interpreting and using experimental data. The derivations of the energy and momentum equations are treated in detail. One-dimensional equations for steady nonuniform flow are developed, and the restrictions applicable to the equations are emphasized. Conditions of uniform and gradually varied flow are discussed, and the origin of the Chezy equation is examined in relation to both the energy and the momentum equations. The inadequacy of all uniform-flow equations as a means of describing gradually varied flow is explained. Thus, one of the definitive problems of river hydraulics is analyzed in the light of present knowledge. This report is the outgrowth of a series of short schools conducted during the spring and summer of 1953 for engineers of the Surface Water Branch, Water Resources Division, U. S. Geological Survey. The topics considered are essentially the same as the topics selected for inclusion in the schools. However, in order that they might serve better as a guide and outline for informal study, the arrangement of the writer's original lecture notes has been considerably altered. The purpose of the report, like the purpose of the schools which inspired it, is to build a simple but strong framework of the fundamentals of fluid mechanics. It is believed that this framework is capable of supporting a detailed analysis of most of the practical problems met by the engineers of the Geological Survey. It is hoped that the least accomplishment of this work will be to inspire the reader with the confidence and desire to read more of the recent and current technical literature of modern fluid mechanics.

  4. Discovery of Web Topic-Specific Association Rules%Web主题关联知识自学习算法

    Institute of Scientific and Technical Information of China (English)

    杨沛; 郑启伦; 彭宏

    2003-01-01

    There is rich hidden information for data mining in the topology of topic-specific websites. A new topic-specific association rules mining algorithm is proposed to further the research in this area. The key idea is to analyze the frequent hyperlinked relations between pages of different topics. In the topic-specific area, if pages of one topic are frequently hyperlinked by pages of another topic, we consider the two topics relevant. Also, if pages of two different topics are frequently hyperlinked together by pages of a third topic, we consider the two topics relevant. Initial experiments show that this algorithm performs quite well while guiding a topic-specific crawling agent, and it can be applied to further discovery and mining on topic-specific websites.
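
    A toy sketch of the key counting step under an assumed data layout (the page-to-topic map, link list, and threshold are invented for illustration):

        # Count how often pages of one topic hyperlink pages of another, and
        # call a topic pair "relevant" when its support clears a threshold.
        from collections import Counter

        links = [("p1", "p2"), ("p1", "p3"), ("p4", "p2"), ("p5", "p3")]
        topic_of = {"p1": "sports", "p4": "sports", "p5": "sports",
                    "p2": "health", "p3": "health"}

        pair_support = Counter(
            (topic_of[a], topic_of[b]) for a, b in links if topic_of[a] != topic_of[b]
        )
        MIN_SUPPORT = 3
        rules = [pair for pair, n in pair_support.items() if n >= MIN_SUPPORT]
        print(rules)   # [('sports', 'health')] with the toy data above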

  5. Safety of topical medications for scabies and lice in pregnancy

    Directory of Open Access Journals (Sweden)

    Viral M Patel

    2016-01-01

    Full Text Available Medications should be employed with caution in women of childbearing age. Topical medications have little systemic absorption. Therefore, they are considered safer than oral or parenteral agents and less likely to be embryotoxic or fetotoxic. However, their safety profile must be assessed cautiously as the available data are limited. In this article, we aggregate human and animal studies to provide recommendations on using topical anti-scabies and anti-lice therapy in pregnancy.

  6. Selecting a Topic in Public Speaking

    Institute of Scientific and Technical Information of China (English)

    刘玉霞

    2013-01-01

    The first step in speechmaking is choosing a topic, which will decide the success of the speech. In this paper, we are going to discuss how to select a topic in public speaking, and we'll mainly talk about the following: the speaker, the occasion and the audience; selecting a topic; specific methods for choosing a topic; and available topics.

  7. Topical agents in burn care

    Directory of Open Access Journals (Sweden)

    Momčilović Dragan

    2002-01-01

    Full Text Available Introduction Understanding of fluid shifts and recognition of the importance of early and appropriate fluid replacement therapy have significantly reduced mortality in the early postburn period. After the burn patient successfully passes the resuscitation period, the burn wound represents the greatest threat to survival. History Since the dawn of civilization, man has been trying to find an agent which would help burn wounds heal and, at the same time, not harm the general condition of the injured. It was not until the XX century, after the discovery of antibiotics, that this condition was fulfilled. In 1968, combining silver and sulfadiazine, Fox made silver-sulfadiazine, a 1% hydro-soluble cream and a superior agent in the topical treatment of burns today. Current topical agents None of the topical antimicrobial agents available today, alone or combined, have the characteristics of ideal prophylactic agents, but they eliminate colonization of the burn wound, and invasive infections are infrequent. With an excellent spectrum of activity, low toxicity, and ease of application with minimal pain, silver-sulfadiazine is still the most frequently used topical agent. Conclusion The incidence of invasive infections and overall mortality have been significantly reduced since the introduction of topical burn wound antimicrobial agents into practice. In most burn patients the drug of choice for prophylaxis is silver sulfadiazine. Other agents may be useful in certain clinical situations.

  8. Quantum mechanics II advanced topics

    CERN Document Server

    Rajasekar, S

    2015-01-01

    Quantum Mechanics II: Advanced Topics uses more than a decade of research and the authors' own teaching experience to expound on some of the more advanced topics and current research in quantum mechanics. A follow-up to the authors' introductory book Quantum Mechanics I: The Fundamentals, this book begins with a chapter on quantum field theory and goes on to present basic principles, key features, and applications. It outlines recent quantum technologies and phenomena, and introduces growing topics of interest in quantum mechanics. The authors describe promising applications that include ghost imaging, detection of weak amplitude objects, entangled two-photon microscopy, detection of small displacements, lithography, metrology, and teleportation of optical images. They also present worked-out examples and provide numerous problems at the end of each chapter.

  9. Topical corticosteroid addiction and phobia

    Directory of Open Access Journals (Sweden)

    Aparajita Ghosh

    2014-01-01

    Full Text Available Corticosteroids, among the most widely prescribed topical drugs, have been used for about six decades to date. However, rampant misuse and abuse over the years have given the drug a bad name. Topical steroid abuse may lead to two major problems, which lie at opposing ends of the psychosomatic spectrum. Topical steroid addiction, a phenomenon that came to be recognized about a decade after the introduction of the molecule, is manifested as psychological distress and a rebound phenomenon on stoppage of the drug. The rebound phenomenon, which can affect various parts of the body, particularly the face and the genitalia, has been reported under various names in the literature. Topical corticosteroid (TC) phobia, which lies at the opposite end of the psychiatric spectrum of steroid abuse, has been reported particularly among parents of atopic children. Management of both conditions is difficult and frustrating. Psychological counseling and support can be of immense help in both conditions.

  10. Considering PTSD for DSM-5.

    Science.gov (United States)

    Friedman, Matthew J; Resick, Patricia A; Bryant, Richard A; Brewin, Chris R

    2011-09-01

    This is a review of the relevant empirical literature concerning the DSM-IV-TR diagnostic criteria for PTSD. Most of this work has focused on Criteria A1 and A2, the two components of the A (Stressor) Criterion. With regard to A1, the review considers: (a) whether A1 is etiologically or temporally related to the PTSD symptoms; (b) whether it is possible to distinguish "traumatic" from "non-traumatic" stressors; and (c) whether A1 should be eliminated from DSM-5. Empirical literature regarding the utility of the A2 criterion indicates that there is little support for keeping the A2 criterion in DSM-5. The B (reexperiencing), C (avoidance/numbing) and D (hyperarousal) criteria are also reviewed. Confirmatory factor analyses suggest that the latent structure of PTSD appears to consist of four distinct symptom clusters rather than the three-cluster structure found in DSM-IV. It has also been shown that in addition to the fear-based symptoms emphasized in DSM-IV, traumatic exposure is also followed by dysphoric, anhedonic symptoms, aggressive/externalizing symptoms, guilt/shame symptoms, dissociative symptoms, and negative appraisals about oneself and the world. A new set of diagnostic criteria is proposed for DSM-5 that: (a) attempts to sharpen the A1 criterion; (b) eliminates the A2 criterion; (c) proposes four rather than three symptom clusters; and (d) expands the scope of the B-E criteria beyond a fear-based context. The final sections of this review consider: (a) partial/subsyndromal PTSD; (b) disorders of extreme stress not otherwise specified (DESNOS)/complex PTSD; (c) cross-cultural factors; (d) developmental factors; and (e) subtypes of PTSD. © 2010 Wiley-Liss, Inc.

  11. Therapeutic Effect of 0.1% Topical Tacrolimus for Childhood Interstitial Keratitis Refractory to Cyclosporine.

    Science.gov (United States)

    Joko, Takeshi; Shiraishi, Atsushi; Ogata, Miki; Ohashi, Yuichi

    2016-01-01

    To report our findings in a case of childhood refractory interstitial keratitis successfully treated with 0.1% topical tacrolimus. A 12-year-old boy presented with a 3-year history of interstitial keratitis. For the recurrent interstitial keratitis he had been treated with topical and systemic acyclovir, steroids, and topical cyclosporine for 3 years. Our examinations revealed severe stromal infiltrates and neovascularization. Treatment was changed from topical 0.5% cyclosporine to topical 0.1% tacrolimus combined with topical acyclovir and betamethasone. After 2 weeks of treatment with topical tacrolimus, the degree of stromal infiltrates decreased. Although the improvements were slow, the stromal infiltrates and neovascularization gradually resolved, and topical acyclovir and betamethasone were tapered and stopped over 18 months. Since then, the patient has shown no recurrence for 9 months without medication. Our findings indicate that topical tacrolimus should be considered for treating refractory interstitial keratitis in children.

  12. Topics in current aerosol research

    CERN Document Server

    Hidy, G M

    1971-01-01

    Topics in Current Aerosol Research deals with the fundamental aspects of aerosol science, with emphasis on experiment and theory describing highly dispersed aerosols (HDAs) as well as the dynamics of charged suspensions. Topics covered range from the basic properties of HDAs to their formation and methods of generation; sources of electric charges; interactions between fluid and aerosol particles; and one-dimensional motion of a charged cloud of particles. This volume comprises 13 chapters and begins with an introduction to the basic properties of HDAs, followed by a discussion on the form

  13. Topical therapies in hyperhidrosis care.

    Science.gov (United States)

    Pariser, David M; Ballard, Angela

    2014-10-01

    Primary focal hyperhidrosis affects 3% of the US population, about the same prevalence as psoriasis. More than half of these patients have primary focal axillary hyperhidrosis: sweating that is beyond what is anticipated or necessary for thermoregulation. Most topical therapies are based on aluminum salts, which work by a chemical reaction that forms plugs in the eccrine sweat ducts. Topical anticholinergics may also be used. Instruction on proper methods and timing of antiperspirant use enhances the effect, and may be effective alone or in combination with other treatments in patients with hyperhidrosis.

  14. Topics in millimeter wave technology

    CERN Document Server

    Button, Kenneth

    1988-01-01

    Topics in Millimeter Wave Technology, Volume 1 presents topics related to millimeter wave technology, including fin-lines and passive components realized in fin-lines, suspended striplines, suspended substrate microstrips, and modal power exchange in multimode fibers. A miniaturized monopulse assembly constructed in planar waveguide with multimode scalar horn feeds is also described. This volume is comprised of five chapters; the first of which deals with the analysis and synthesis techniques for fin-lines as well as the various passive components realized in fin-line. Tapers, discontinuities,

  15. Topical phenytoin for treating pressure ulcers.

    Science.gov (United States)

    Hao, Xiang Yong; Li, Hong Ling; Su, He; Cai, Hui; Guo, Tian Kang; Liu, Ruifeng; Jiang, Lei; Shen, Yan Fei

    2017-02-22

    …reduced healing. We therefore considered it to be insufficient to determine the effect of topical phenytoin on ulcer healing. One study compared topical phenytoin with triple antibiotic ointment; however, none of the outcomes of interest to this review were reported. No adverse drug reactions or interactions were detected in any of the three RCTs. Minimal pain was reported in all groups in one trial that compared topical phenytoin with hydrocolloid dressings and triple antibiotic ointment. This review has considered the available evidence, and the results show that it is uncertain whether topical phenytoin improves ulcer healing for patients with grade I and II pressure ulcers. No adverse events were reported in three small trials, and minimal pain was reported in one trial. Therefore, further rigorous, adequately powered RCTs examining the effects of topical phenytoin for treating pressure ulcers, and reporting on adverse events, quality of life and costs, are necessary.

  16. Automatic Labelling of Topics with Neural Embeddings

    OpenAIRE

    Bhatia, Shraey; Lau, Jey Han; Baldwin, Timothy

    2016-01-01

    Topics generated by topic models are typically represented as list of terms. To reduce the cognitive overhead of interpreting these topics for end-users, we propose labelling a topic with a succinct phrase that summarises its theme or idea. Using Wikipedia document titles as label candidates, we compute neural embeddings for documents and words to select the most relevant labels for topics. Compared to a state-of-the-art topic labelling system, our methodology is simpler, more efficient, and ...
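
    A minimal sketch of the embedding-based ranking idea, with toy vectors standing in for learned word and title embeddings (all names and values are illustrative assumptions, not the paper's system):

        # Score each candidate label (e.g., a Wikipedia title) by cosine
        # similarity to the centroid of the topic's top terms.
        import numpy as np

        def cosine(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        word_vec = {"goal": np.array([0.9, 0.1]), "match": np.array([0.8, 0.2]),
                    "Football": np.array([0.85, 0.15]), "Economy": np.array([0.1, 0.9])}

        topic_terms = ["goal", "match"]
        centroid = np.mean([word_vec[w] for w in topic_terms], axis=0)
        candidates = ["Football", "Economy"]
        best = max(candidates, key=lambda c: cosine(word_vec[c], centroid))
        print(best)   # Football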

  17. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates....

  18. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  20. Evaluating topic models with stability

    CSIR Research Space (South Africa)

    De Waal, A

    2008-11-01

    Full Text Available Evaluating topic models is complicated because (a) they are trained on unlabelled data, so that a ground truth does not exist, and (b) "soft" (probabilistic) document clusters are created by state-of-the-art topic models, which complicates comparisons even when ground truth labels are available. Perplexity has often been used...

  1. The Health Curriculum: 500 Topics.

    Science.gov (United States)

    Byrd, Oliver E.

    2001-01-01

    This 1958 paper divides 500 health topics into 20 categories: health as a social accomplishment/social problem; nutrition; physical fitness; mental health and disease; heredity/eugenics; infection/immunity; chronic and degenerative disease; substance abuse; skin care; vision, hearing, and speech; dental health; safety; physical environment; health…

  2. Seven topics in perturbative QCD

    Energy Technology Data Exchange (ETDEWEB)

    Buras, A.J.

    1980-09-01

    The following topics of perturbative QCD are discussed: (1) deep inelastic scattering; (2) higher order corrections to e⁺e⁻ annihilation, to photon structure functions and to quarkonia decays; (3) higher order corrections to fragmentation functions and to various semi-inclusive processes; (4) higher twist contributions; (5) exclusive processes; (6) transverse momentum effects; (7) jet and photon physics.

  3. Web mining for topics defined by complex and precise predicates

    Science.gov (United States)

    Lee, Ching-Cheng; Sampathkumar, Sushma

    2004-04-01

    The enormous growth of the World Wide Web has made it important to perform resource discovery efficiently for any given topic. Several new techniques have been proposed in recent years for this kind of topic-specific web mining, among them a key new technique called focused crawling, which is able to crawl topic-specific portions of the web without having to explore all pages. Most existing research on focused crawling considers a simple topic definition that typically consists of one or more keywords connected by an OR operator. However, this kind of simple topic definition may result in too many irrelevant pages in which the same keyword appears in a wrong context. In this research we explore new strategies for crawling topic-specific portions of the web using complex and precise predicates. A complex predicate allows the user to precisely specify a topic using Boolean operators such as "AND", "OR" and "NOT". Our work concentrates, first, on defining a format to specify this kind of complex topic definition and, second, on devising a crawl strategy that crawls the topic-specific portions of the web defined by the complex predicate efficiently and with minimal overhead. Our new crawl strategy improves the performance of topic-specific web crawling by reducing the number of irrelevant pages crawled. In order to demonstrate the effectiveness of the above approach, we have built a complete focused crawler called "Eureka" with complex predicate support, and a search engine that indexes and supports end-user searches on the crawled pages.
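
    A minimal sketch of how such a complex predicate might be represented and evaluated against a crawled page's terms (the tuple encoding is an assumption of this illustration, not Eureka's actual format):

        # Evaluate a predicate like ("solar" AND "energy") AND NOT "stocks"
        # so that off-topic keyword matches are filtered out during the crawl.
        def matches(predicate, terms):
            op = predicate[0]
            if op == "TERM":
                return predicate[1] in terms
            if op == "NOT":
                return not matches(predicate[1], terms)
            subresults = (matches(p, terms) for p in predicate[1:])
            return all(subresults) if op == "AND" else any(subresults)

        topic = ("AND", ("TERM", "solar"), ("TERM", "energy"),
                 ("NOT", ("TERM", "stocks")))
        print(matches(topic, {"solar", "energy", "panels"}))   # True
        print(matches(topic, {"solar", "energy", "stocks"}))   # False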

  4. The pharmacology of topical analgesics.

    Science.gov (United States)

    Barkin, Robert L

    2013-07-01

    Pain management of patients continues to pose challenges to clinicians. Given the multiple dimensions of pain--whether acute or chronic, mild, moderate, or severe, nociceptive or neuropathic--a multimodal approach may be needed. Fortunately, clinicians have an array of nonpharmacologic and pharmacologic treatment choices; however, each modality must be chosen carefully, because some often-used oral agents are associated with safety and tolerability issues that restrict their use in certain patients. In particular, orally administered nonsteroidal anti-inflammatory drugs, opioids, antidepressants, and anticonvulsants are known to cause systemic adverse effects in some patients. To address this problem, a number of topical therapies in various therapeutic classes have been developed to reduce systemic exposure and minimize the risks of patients developing adverse events. For example, topical nonsteroidal anti-inflammatory drug formulations produce a site-specific effect (ie, cyclo-oxygenase inhibition) while decreasing the systemic exposure that may lead to undesired effects in patients. Similarly, derivatives of acetylsalicylic acid (ie, salicylates) are used in topical analgesic formulations that do not significantly enter the patient's systemic circulation. Salicylates, along with capsaicin, menthol, and camphor, compose the counterirritant class of topical analgesics, which produce analgesia by activating and then desensitizing epidermal nociceptors. Additionally, patches and creams that contain the local anesthetic lidocaine, alone or co-formulated with other local anesthetics, are also used to manage patients with select acute and chronic pain states. Perhaps the most common topical analgesic modality is the cautious application of cutaneous cold and heat. Such treatments may decrease pain not by reaching the target tissue through systemic distribution, but by acting more directly on the affected tissue. Despite the tolerability benefits associated with avoiding

  5. Nonperturbative Lattice Simulation of High Multiplicity Cross Section Bound in $\phi^4_3$ on Beowulf Supercomputer

    CERN Document Server

    Charng, Y Y

    2001-01-01

    In this thesis, we have investigated the possibility of large cross sections at large multiplicity in weakly coupled three-dimensional $\phi^4$ theory using Monte Carlo simulation methods. We have built a Beowulf supercomputer for this purpose. We use spectral function sum rules to derive a bound on the total cross section, where the quantity determining the bound can be measured by Monte Carlo simulation in Euclidean space. We determine the critical threshold energy for large high-multiplicity cross sections according to the analysis of M.B. Voloshin and of E.N. Argyres, R.M.P. Kleiss, and C.G. Papadopoulos. We compare the simulation results with the perturbative results and see no evidence for a large cross section in the range where tree-diagram estimates suggest it should exist.

  6. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model to predict the performance of HPCG and Cart3D to within 5% accuracy, and of Overflow to within 10%.

  7. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    Science.gov (United States)

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations that take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software tool, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
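
    The master/worker pattern described here can be sketched with mpi4py (an illustrative assumption: mpiWrapper itself is C++ and also handles resubmission on node failure, which is omitted below, and the command names are made up):

```python
# Minimal MPI master/worker sketch: rank 0 hands out shell commands for a
# non-parallel tool; workers run them and report back.
# Run with e.g.: mpiexec -n 8 python driver.py
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
TASK_TAG, STOP_TAG = 1, 2

if rank == 0:
    # Hypothetical queue of subtask command lines.
    tasks = [f"./blast_search input_{i}.fasta" for i in range(100)]
    status = MPI.Status()
    nworkers = comm.Get_size() - 1
    stopped = 0
    # Seed each worker with an initial task (or a stop if the queue is short).
    for dest in range(1, nworkers + 1):
        if tasks:
            comm.send(tasks.pop(), dest=dest, tag=TASK_TAG)
        else:
            comm.send(None, dest=dest, tag=STOP_TAG)
            stopped += 1
    # Collect results and keep feeding workers until the queue drains.
    while stopped < nworkers:
        cmd, rc = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        src = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=src, tag=TASK_TAG)
        else:
            comm.send(None, dest=src, tag=STOP_TAG)
            stopped += 1
else:
    # Worker: run each received command in a subprocess, return its exit code.
    status = MPI.Status()
    while True:
        cmd = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP_TAG:
            break
        rc = subprocess.call(cmd, shell=True)
        comm.send((cmd, rc), dest=0)
```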

  8. How can we enhance girls' interest in scientific topics?

    Science.gov (United States)

    Kerger, Sylvie; Martin, Romain; Brunner, Martin

    2011-12-01

    Girls are considerably less interested in scientific subjects than boys. One reason may be that scientific subjects are considered to be genuinely masculine. Thus, being interested in science may threaten the self-perception of girls as well as the femininity of their self-image. If scientific topics that are considered to be stereotypically feminine were chosen, however, this potential threat might be overcome which, in turn, might lead to an increase in girls' interest in science. This hypothesis was empirically tested by means of two studies. Participants were 294 (Study 1) and 190 (Study 2) Grade 8 to Grade 9 students. Gender differences in students' interest in masculine and feminine topics were investigated for a range of scientific concepts (Study 1) as well as for a given scientific concept (Study 2) for four scientific subjects (i.e., biology, physics, information technology, and statistics), respectively. Both studies indicated that the mean level of girls' scientific interest was higher when scientific concepts were presented in the context of feminine topics and boys' level of scientific interests was higher when scientific concepts were presented in the context of masculine topics. Girls' interest in science could be substantially increased by presenting scientific concepts in the context of feminine topics. Gender differences as well as individual differences in the level of interest in scientific topics may be taken into account by creating learning environments in which students could select the context in which a certain scientific concept is embedded. ©2011 The British Psychological Society.

  9. Injecting Structured Data to Generative Topic Model in Enterprise Settings

    Science.gov (United States)

    Xiao, Han; Wang, Xiaojie; Du, Chao

    Enterprises have steadily accumulated both structured and unstructured data as computing resources improve. However, previous research on enterprise data mining often treats these two kinds of data independently and overlooks their mutual benefits. We explore an approach that incorporates a common type of structured data (i.e., the organigram) into a generative topic model. Our approach, the Partially Observed Topic model (POT), not only considers the unstructured words, but also takes into account the structured information in its generation process. By integrating the structured data implicitly, the mixed topics over each document are partially observed during the Gibbs sampling procedure. This allows POT to learn topics pertinently and directionally, which makes it easy to tune and suitable for end-use applications. We evaluate our proposed new model on a real-world dataset and show improved expressiveness over traditional LDA. In the task of document classification, POT also demonstrates more discriminative power than LDA.
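
    For readers unfamiliar with the Gibbs sampling machinery POT builds on, a minimal collapsed Gibbs sampler for plain LDA is sketched below (illustrative only; POT's partially observed topics and organigram conditioning are not reproduced, and all sizes and hyperparameters are made up):

```python
# Collapsed Gibbs sampling for standard LDA on a toy corpus of word-id lists.
import numpy as np

def lda_gibbs(docs, V, K=5, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))   # document-topic counts
    nkw = np.zeros((K, V))           # topic-word counts
    nk = np.zeros(K)                 # topic totals
    z = []                           # topic assignment per token
    for d, doc in enumerate(docs):
        zd = rng.integers(K, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]          # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional p(z = k | everything else)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)
    phi = (nkw + beta) / (nk[:, None] + V * beta)
    return theta, phi

# Toy corpus: each document is a list of word ids over a vocabulary of size 4.
docs = [[0, 1, 0, 1], [2, 3, 3, 2], [0, 1, 2, 3]]
theta, phi = lda_gibbs(docs, V=4)
print(theta.round(2))  # per-document topic mixtures
```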

  10. LDRD final report : a lightweight operating system for multi-core capability class supercomputers.

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Hudson, Trammell B. (OS Research); Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.; Brightwell, Ronald Brian

    2010-09-01

    The two primary objectives of this LDRD project were to create a lightweight kernel (LWK) operating system (OS) designed to take maximum advantage of multi-core processors, and to leverage the virtualization capabilities in modern multi-core processors to create a more flexible and adaptable LWK environment. The most significant technical accomplishments of this project were the development of the Kitten lightweight kernel, the co-development of the SMARTMAP intra-node memory mapping technique, and the development and demonstration of a scalable virtualization environment for HPC. Each of these topics is presented in this report by the inclusion of a published or submitted research paper. The results of this project are being leveraged by several ongoing and new research projects.

  11. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Poole, G.; Heroux, M. [Engineering Applications Group, Eagan, MN (United States)

    1994-12-31

    This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as general purpose solvers will include the topics of robustness, as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of the preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  12. Scientific articles recommendation with topic regression and relational matrix factorization

    Institute of Scientific and Technical Information of China (English)

    Ming YANG; Ying-ming LI; Zhongfei(Mark)ZHANG

    2014-01-01

    In this paper we study the problem of recommending scientific articles to users in an online community from a new perspective, considering topic regression modeling and the articles' relational structure analysis simultaneously. First, we present a novel topic regression model, the topic regression matrix factorization (tr-MF), to solve the problem. The main idea of tr-MF lies in extending matrix factorization with probabilistic topic modeling. In particular, tr-MF introduces a regression model to regularize user factors through probabilistic topic modeling, under the basic hypothesis that users share similar preferences if they rate similar sets of items. Consequently, tr-MF provides interpretable latent factors for users and items, and makes accurate predictions for community users. To incorporate the relational structure into the framework of tr-MF, we introduce relational matrix factorization. By combining tr-MF with relational matrix factorization, we propose the topic regression collective matrix factorization (tr-CMF) model. In addition, we also present the collaborative topic regression model with relational matrix factorization (CTR-RMF), which combines the existing collaborative topic regression (CTR) model and relational matrix factorization (RMF). From this point of view, CTR-RMF can be considered an appropriate baseline for tr-CMF. Further, we demonstrate the efficacy of the proposed models on a large subset of the data from CiteULike, a bibliography sharing service. The proposed models outperform the state-of-the-art matrix factorization models by a significant margin. Specifically, the proposed models are effective in making predictions for users with only few ratings or even no ratings, and support tasks that are specific to a certain field, neither of which has been addressed in the existing literature.
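
    As a rough illustration of the central idea, regularizing factors toward topic proportions inside a matrix factorization, the following numpy sketch runs an alternating-least-squares factorization with a "topic pull" term (illustrative only; the paper's exact objective, confidence weighting, and relational extension are not reproduced, and all data are invented):

```python
# Matrix factorization with item factors regularized toward topic proportions.
import numpy as np

def tr_mf(R, theta, k, lam=0.1, lam_t=1.0, iters=50, seed=0):
    """R: user x item ratings (0 = unobserved); theta: item x k topic mixtures."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = theta.copy()                  # initialize items at their topic proportions
    mask = R > 0
    for _ in range(iters):
        for u in range(n_users):      # user step: plain ridge regression
            idx = mask[u]
            if idx.any():
                A = V[idx].T @ V[idx] + lam * np.eye(k)
                U[u] = np.linalg.solve(A, V[idx].T @ R[u, idx])
        for i in range(n_items):      # item step: ridge regression + topic pull
            idx = mask[:, i]
            A = U[idx].T @ U[idx] + (lam + lam_t) * np.eye(k)
            b = U[idx].T @ R[idx, i] + lam_t * theta[i]
            V[i] = np.linalg.solve(A, b)
    return U, V

# Toy data: 4 users, 3 items, 2 "topics" (all numbers made up).
R = np.array([[5, 0, 1], [4, 0, 0], [0, 2, 5], [1, 0, 4.0]])
theta = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
U, V = tr_mf(R, theta, k=2)
print((U @ V.T).round(1))  # reconstructed preference scores
```

    Items with no ratings fall back to a scaled copy of their topic proportions, which is exactly the property that lets such models make predictions for cold-start items.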

  13. Do scientists trace hot topics?

    Science.gov (United States)

    Wei, Tian; Li, Menghui; Wu, Chensheng; Yan, Xiao-Yong; Fan, Ying; Di, Zengru; Wu, Jinshan

    2013-01-01

    Do scientists follow hot topics in their scientific investigations? In this paper, by analyzing papers published in the American Physical Society (APS) Physical Review journals, it is found that papers are more likely to be attracted to hot fields, where the hotness of a field is measured by the number of papers belonging to it. This indicates that scientists generally do follow hot topics. However, there are qualitative differences among scientists from various countries, and among research works with different numbers of authors, affiliations, and references. These observations could be valuable for policy makers when deciding research funding, and also for individual researchers when searching for scientific projects.

  14. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization problems cover both linear programming problems, which are polynomially solvable, and mixed integer linear programming problems, which belong to the class of NP-hard problems. The three main reasons for the practical success of linear optimization are: wide applicability, availability of high quality solvers, and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric systems of linear equations, C) reduction of linear programs, and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based ... reductions. In the fourth and last paper, a prototype implementation of a C++ class library, FLOPC++, for formulating linear optimization problems is presented. Using FLOPC++, linear optimization models can be specified in a declarative style, similar to algebraic modelling languages such as GAMS and AMPL.

  15. Hot topics in functional neuroradiology.

    Science.gov (United States)

    Faro, S H; Mohamed, F B; Helpern, J A; Jensen, J H; Thulborn, K R; Atkinson, I C; Sair, H I; Mikulis, D J

    2013-12-01

    Functional neuroradiology represents a relatively new and ever-growing subspecialty in the field of neuroradiology. Neuroradiology has evolved beyond anatomy and basic tissue signal characteristics and strives to understand the underlying physiologic processes of central nervous system disease. The American Society of Functional Neuroradiology sponsors a yearly educational and scientific meeting, and the educational committee was asked to suggest a few cutting-edge functional neuroradiology techniques (hot topics). The following is a review of several of these topics and includes "Diffusion Tensor Imaging of the Pediatric Spinal Cord"; "Diffusional Kurtosis Imaging"; "From Standardization to Quantification: Beyond Biomarkers toward Bioscales as Neuro MR Imaging Surrogates of Clinical End Points"; "Resting-State Functional MR Imaging"; and "Current Use of Cerebrovascular Reserve Imaging."

  16. Retapamulin: A newer topical antibiotic

    Directory of Open Access Journals (Sweden)

    D Dhingra

    2013-01-01

    Full Text Available Impetigo is a common childhood skin infection. There are reports of increasing drug resistance to the currently used topical antibiotics, including fusidic acid and mupirocin. Retapamulin is a newer topical agent of the pleuromutilin class, approved by the Food and Drug Administration for the treatment of impetigo in children, and has recently been made available in the Indian market. It has been demonstrated to have low potential for the development of antibacterial resistance and a high degree of potency against poly-drug-resistant Gram-positive bacteria found in skin infections, including Staphylococcus aureus strains. The drug is safe owing to low systemic absorption and has only the minimal side-effect of local irritation at the site of application.

  17. Topics on nonlinear generalized functions

    CERN Document Server

    Colombeau, J F

    2011-01-01

    The aim of this paper is to give the text of a recent introduction to nonlinear generalized functions presented in my talk at the congress gf2011, which several participants requested. Three representative topics were presented: two recalls, "Nonlinear generalized functions and their connections with distribution theory" and "Examples of applications", and a recent development, "Locally convex topologies and compactness: a functional analysis of nonlinear generalized functions".

  18. Stochastic Analysis and Related Topics

    CERN Document Server

    Ustunel, Ali

    1988-01-01

    The Silvri Workshop was divided into a short summer school and a working conference, producing lectures and research papers on recent developments in stochastic analysis on Wiener space. The topics treated in the lectures relate to the Malliavin calculus, the Skorohod integral and nonlinear functionals of white noise. Most of the research papers are applications of these subjects. This volume addresses researchers and graduate students in stochastic processes and theoretical physics.

  19. Hot topics from the Tevatron

    Energy Technology Data Exchange (ETDEWEB)

    Glenzinski, D.; /Fermilab

    2008-01-01

    The Tevatron Run-II began in March 2001. To date, both the CDF and D0 experiments have collected 1 fb⁻¹ of data each. The results obtained from this data set were summarized at this conference in 39 parallel session presentations covering a wide range of topics. The author summarizes the most important of those results here and comments on some of the prospects for the future.

  1. Topical Ocular Delivery of NSAIDs

    OpenAIRE

    Ahuja, Munish; Avinash S Dhake; Sharma, Surendra K; Dipak K Majumdar

    2008-01-01

    In ocular tissue, arachidonic acid is metabolized by cyclooxygenase to prostaglandins, which are the most important lipid-derived mediators of inflammation. Presently, nonsteroidal anti-inflammatory drugs (NSAIDs), which are cyclooxygenase (COX) inhibitors, are being used for the treatment of inflammatory disorders. NSAIDs used topically in ophthalmology are salicylic-, indole acetic-, aryl acetic-, aryl propionic- and enolic acid derivatives. NSAIDs are weak acids with pKa mostly between 3.5 a...

  2. Topic Detection in Online Chat

    Science.gov (United States)

    2009-09-01

  3. Topical fluoride for caries prevention

    Science.gov (United States)

    Weyant, Robert J.; Tracy, Sharon L.; Anselmo, Theresa (Tracy); Beltrán-Aguilar, Eugenio D.; Donly, Kevin J.; Frese, William A.; Hujoel, Philippe P.; Iafolla, Timothy; Kohn, William; Kumar, Jayanth; Levy, Steven M.; Tinanoff, Norman; Wright, J. Timothy; Zero, Domenick; Aravamudhan, Krishna; Frantsve-Hawley, Julie; Meyer, Daniel M.

    2015-01-01

    Background: A panel of experts convened by the American Dental Association (ADA) Council on Scientific Affairs presents evidence-based clinical recommendations regarding professionally applied and prescription-strength, home-use topical fluoride agents for caries prevention. These recommendations are an update of the 2006 ADA recommendations regarding professionally applied topical fluoride and were developed by using a new process that includes conducting a systematic review of primary studies. Types of Studies Reviewed: The authors conducted a search of MEDLINE and the Cochrane Library for clinical trials of professionally applied and prescription-strength topical fluoride agents—including mouthrinses, varnishes, gels, foams and pastes—with caries increment outcomes published in English through October 2012. Results: The panel included 71 trials from 82 articles in its review and assessed the efficacy of various topical fluoride caries-preventive agents. The panel makes recommendations for further research. Practical Implications: The panel recommends the following for people at risk of developing dental caries: 2.26 percent fluoride varnish or 1.23 percent fluoride (acidulated phosphate fluoride) gel, or a prescription-strength, home-use 0.5 percent fluoride gel or paste or 0.09 percent fluoride mouthrinse for patients 6 years or older. Only 2.26 percent fluoride varnish is recommended for children younger than 6 years. The strengths of the recommendations for the recommended products varied from “in favor” to “expert opinion for.” As part of the evidence-based approach to care, these clinical recommendations should be integrated with the practitioner's professional judgment and the patient's needs and preferences. PMID:24177407

  4. Probabilistic analysis and related topics

    CERN Document Server

    Bharucha-Reid, A T

    1983-01-01

    Probabilistic Analysis and Related Topics, Volume 3 focuses on the continuity, integrability, and differentiability of random functions, including operator theory, measure theory, and functional and numerical analysis. The selection first offers information on the qualitative theory of stochastic systems and Langevin equations with multiplicative noise. Discussions focus on phase-space evolution via direct integration, phase-space evolution, linear and nonlinear systems, linearization, and generalizations. The text then ponders on the stability theory of stochastic difference systems and Marko

  5. Probabilistic analysis and related topics

    CERN Document Server

    Bharucha-Reid, A T

    1979-01-01

    Probabilistic Analysis and Related Topics, Volume 2 focuses on the integrability, continuity, and differentiability of random functions, as well as functional analysis, measure theory, operator theory, and numerical analysis.The selection first offers information on the optimal control of stochastic systems and Gleason measures. Discussions focus on convergence of Gleason measures, random Gleason measures, orthogonally scattered Gleason measures, existence of optimal controls without feedback, random necessary conditions, and Gleason measures in tensor products. The text then elaborates on an

  6. Topical management of facial burns.

    Science.gov (United States)

    Leon-Villapalos, Jorge; Jeschke, Marc G; Herndon, David N

    2008-11-01

    The face is the central point of the physical features of the human being. It transmits expressions and emotions, communicates feelings and allows for individual identity. It contains complex musculature and a pliable and unique skin envelope that reacts to the environment through a vast network of nerve endings. The face hosts vital areas that make phonation, feeding, and vision possible. Facial burns disrupt these anatomical and functional structures creating pain, deformity, swelling, and contractures that may lead to lasting physical and psychological sequelae. The management of facial burns may include operative and non-operative treatment or both, depending on the depth and extent of the burn. This paper intends to provide a review of the available options for topical management of facial burns. Topical agents will be defined as any agent applied to the surface of the skin that alters the outcome of the facial burn. Therefore, the classic concept of topical therapy will be expanded and developed within two major stages: acute and rehabilitation. Comparison of the effectiveness of the different treatments and relevant literature will be discussed.

  7. Emollients: application of topical treatments to the skin.

    Science.gov (United States)

    Dunning, Gail

    Nurses working in various clinical settings can make a real difference to the clinical effectiveness of topical applications to the skin by increasing their knowledge of the choices and mechanisms involved. This is particularly the case for nurses who are non-medical prescribers when considering first-line therapies for patients with eczema. It is important for all nurses to acknowledge that topical treatments, however simple, are a form of drug therapy and must be given the same consideration of their specific actions whether one is prescribing, applying or reviewing them. Emollients have an important part to play in the treatment of all dry skin conditions. This article focuses on increasing the reader's knowledge of the application of topical emollient therapy, as well as highlighting some practical tips to consider.

  8. Medicare Payment: Surgical Dressings and Topical Wound Care Products.

    Science.gov (United States)

    Schaum, Kathleen D

    2014-08-01

    Medicare patients' access to surgical dressings and topical wound care products is greatly influenced by the Medicare payment system that exists in each site of care. Qualified healthcare professionals should consider these payment systems, as well as the medical necessity for surgical dressings and topical wound care products. Scientists and manufacturers should also consider these payment systems, in addition to the Food and Drug Administration requirements for clearance or approval, when they are developing new surgical dressings and topical wound care products. Due to the importance of the Medicare payment systems, this article reviews the Medicare payment systems in acute care hospitals, long-term acute care hospitals, skilled nursing facilities, home health agencies, durable medical equipment suppliers, hospital-based outpatient wound care departments, and qualified healthcare professional offices.

  9. Topical and oral antibiotics for acne vulgaris.

    Science.gov (United States)

    Del Rosso, James Q

    2016-06-01

    Antibiotics, both oral and topical, have been an integral component of the management of acne vulgaris (AV) for approximately six decades. Originally they were thought to be effective for AV due to their ability to inhibit the proliferation of Propionibacterium acnes; it is now believed that at least some antibiotics also exert anti-inflammatory effects that provide additional therapeutic benefit. In addition, strains of P acnes and other exposed bacteria that are less sensitive to the antibiotics used to treat AV have emerged, with resistance correlated geographically with the magnitude of antibiotic use. Although antibiotics still remain part of the therapeutic armamentarium for AV treatment, current recommendations support the following when they are used to treat AV: 1) monotherapy use should be avoided; 2) benzoyl peroxide should be used concomitantly to reduce the emergence of resistant P acnes strains; 3) oral antibiotics should be used in combination with a topical regimen for moderate-to-severe inflammatory AV; and 4) oral antibiotics should be used over a limited duration to achieve control of inflammatory AV, with an exit plan in place to discontinue their use as soon as possible. When selecting an oral antibiotic to treat AV, potential adverse effects are important to consider.

  10. Stability of ascorbyl palmitate in topical microemulsions.

    Science.gov (United States)

    Spiclin, P; Gasperlin, M; Kmetec, V

    2001-07-17

    Ascorbyl palmitate and sodium ascorbyl phosphate are derivatives of ascorbic acid, which differ in stability and hydro-lipophilic properties. They are widely used in cosmetic and pharmaceutical preparations. In the present work the stability of both derivatives was studied in microemulsions for topical use as carrier systems. The microemulsions were of both o/w and w/o types and composed of the same ingredients. The stability of the less stable derivative, ascorbyl palmitate, was tested under different conditions to evaluate the influence of initial concentration, location in the microemulsion, dissolved oxygen and storage conditions. High concentrations of ascorbyl palmitate reduced the extent of its degradation. The location of ascorbyl palmitate in the microemulsion and the oxygen dissolved in the system together significantly influence the stability of the compound. Light accelerated the degradation of ascorbyl palmitate. In contrast, sodium ascorbyl phosphate was stable in both types of microemulsions. Sodium ascorbyl phosphate is shown to be convenient as an active ingredient in topical preparations. In the case of ascorbyl palmitate, long-term stability in the selected microemulsions was not adequate. To formulate an optimal carrier system for this ingredient, other factors influencing the stability have to be considered.

  11. Ready Reference Tools: EBSCO Topic Search and SIRS Researcher.

    Science.gov (United States)

    Goins, Sharon; Dayment, Lu

    1998-01-01

    Discussion of ready reference and current events collections in high school libraries focuses on a comparison of two CD-ROM services, EBSCO Topic Search and the SIRS Researcher. Considers licensing; access; search strategies; viewing articles; currency; printing; added value features; and advantages of CD-ROMs. (LRW)

  12. Photodynamic therapy versus topical imiquimod versus topical fluorouracil for treatment of superficial basal-cell carcinoma: a single blind, non-inferiority, randomised controlled trial

    NARCIS (Netherlands)

    Arits, Aimee H. M. M.; Mosterd, Klara; Essers, Brigitte A. B.; Spoorenberg, Eefje; Sommer, Anja; De Rooij, Michette J. M.; van Pelt, Han P. A.; Quaedvlieg, Patricia J. F.; Krekels, Gertruud A. M.; van Neer, Pierre A. F. A.; Rijzewijk, Joris J.; van Geest, Adrienne J.; Steijlen, Peter M.; Nelemans, Patty J.; Kelleners-Smeets, Nicole W. J.

    2013-01-01

    Background: Superficial basal-cell carcinoma is most commonly treated with topical non-surgical treatments, such as photodynamic therapy or topical creams. Photodynamic therapy is considered the preferable treatment, although this has not previously been tested in a randomised controlled trial.

  13. Topical tar: Back to the future

    Energy Technology Data Exchange (ETDEWEB)

    Paghdal, K.V.; Schwartz, R.A. [University of Medicine & Dentistry of New Jersey, Newark, NJ (United States)

    2009-08-15

    The use of medicinal tar for dermatologic disorders dates back to ancient times. Although coal tar is utilized more frequently in modern dermatology, wood tars have also been widely employed. Tar is used mainly in the treatment of chronic stable plaque psoriasis, scalp psoriasis, atopic dermatitis, and seborrheic dermatitis, either alone or in combination therapy with other medications, phototherapy, or both. Many modifications have been made to tar preparations to increase their acceptability, as some patients dislike the odor, messy application, and staining of clothing. One should consider a tried-and-true treatment with tar that has led to clearing of lesions and prolonged remission times. Occupational studies have demonstrated the carcinogenicity of tar; however, epidemiologic studies do not confirm similar outcomes when tar is used topically. This article will review the pharmacology, formulations, efficacy, and adverse effects of crude coal tar and other tars in the treatment of selected dermatologic conditions.

  14. Topic extraction from adverbial clauses

    Directory of Open Access Journals (Sweden)

    Carlos Rubio Alcalá

    2016-06-01

    Full Text Available This paper offers new data to support findings about Topic extraction from adverbial clauses. Since such clauses are strong islands, they should not allow extraction of any kind, but we show here that if the appropriate conditions are met, Topics of the CLLD kind in Romance can move out of them. We propose that two conditions must be met for such movement to be possible: the first is that the adverbial clause must have undergone topicalisation in the first place; the second is that the adverbial clause is inherently topical from a semantic viewpoint. Contrast with other language families (Germanic, Quechua and Japanese) is provided and the semantic implications of the proposal are briefly discussed. Keywords: topicalisation; Clitic Left Dislocation; syntactic islands; adverbial clauses

  15. Tactile friction of topical formulations.

    Science.gov (United States)

    Skedung, L; Buraczewska-Norin, I; Dawood, N; Rutland, M W; Ringstad, L

    2016-02-01

    The tactile perception is essential for all types of topical formulations (cosmetic, pharmaceutical, medical device), and the possibility of predicting the sensorial response by using instrumental methods instead of sensory testing would save time and cost at an early stage of product development. Here, we report on an instrumental evaluation method using tactile friction measurements to estimate perceptual attributes of topical formulations. Friction was measured between an index finger and an artificial skin substrate after application of the formulations, using a force sensor. Both model formulations of liquid crystalline phase structures with significantly different tactile properties and commercial pharmaceutical moisturizing creams that are more tactile-similar were investigated. Friction coefficients were calculated as the ratio of the friction force to the applied load. The structures of the model formulations and phase transitions as a result of water evaporation were identified using optical microscopy. The friction device could distinguish friction coefficients between the phase structures, as well as between the commercial creams after spreading and absorption into the substrate. In addition, phase transitions resulting in alterations in the feel of the formulations could be detected. A correlation was established between skin hydration and friction coefficient, where hydrated skin gave rise to higher friction. A link between skin smoothening and finger friction was also established for the commercial moisturizing creams, although further investigations are needed to analyse this and correlations with other sensorial attributes in more detail. The present investigation shows that tactile friction measurements have potential as an alternative or complement in the evaluation of perception of topical formulations. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
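
    As a concrete reading of that definition, the friction coefficient is the pointwise ratio of measured friction force to applied load; a toy computation on synthetic force-sensor samples (all numbers invented, not the authors' instrument code) looks like:

```python
# Estimate a friction coefficient from synthetic force-sensor samples.
import numpy as np

rng = np.random.default_rng(1)
load = 0.5 + 0.05 * rng.standard_normal(500)               # N, applied finger load
friction = 0.35 * load + 0.01 * rng.standard_normal(500)   # N, tangential (friction) force

mu = friction / load  # instantaneous friction coefficient per sample
print(f"mean mu = {mu.mean():.3f} +/- {mu.std():.3f}")  # ~0.35 for this synthetic stroke
```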

  16. Potential theory—selected topics

    CERN Document Server

    Aikawa, Hiroaki

    1996-01-01

    The first part of these lecture notes is an introduction to potential theory to prepare the reader for later parts, which can be used as the basis for a series of advanced lectures/seminars on potential theory/harmonic analysis. Topics covered in the book include minimal thinness, quasiadditivity of capacity, applications of singular integrals to potential theory, L^p-capacity theory, fine limits of the Nagel-Stein boundary limit theorem and integrability of superharmonic functions. The notes are written for an audience familiar with the theory of integration, distributions and basic functional analysis.

  17. Topics in atomic collision theory

    CERN Document Server

    Geltman, Sydney; Brueckner, Keith A

    1969-01-01

    Topics in Atomic Collision Theory originated in a course of graduate lectures given at the University of Colorado and at University College in London. It is recommended for students in physics and related fields who are interested in the application of quantum scattering theory to low-energy atomic collision phenomena. No attention is given to the electromagnetic, nuclear, or elementary particle domains. The book is organized into three parts: static field scattering, electron-atom collisions, and atom-atom collisions. These are in the order of increasing physical complexity and hence necessar

  18. Timely topics in pediatric psychiatry.

    Science.gov (United States)

    Dineen Wagner, Karen

    2014-11-01

    This section of Focus on Childhood and Adolescent Mental Health presents findings on an array of topics including inflammation and child and adolescent depression, glutamatergic dysregulation and pediatric psychiatric disorders, predictors of bipolar disorder in children with attention-deficit/hyperactivity disorder (ADHD), and the continuum between obsessive-compulsive personality disorder (OCPD) and obsessive-compulsive disorder (OCD). There is increased interest in the role of inflammation in psychiatric disorders. Kim and colleagues conducted a systematic literature review to examine the relationships between inflammatory processes, inflammation, medical conditions, and depression and suicidality in children and adolescents. © Copyright 2014 Physicians Postgraduate Press, Inc.

  1. Topics in Electricity Transmission Pricing

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerndal, Mette

    2000-02-01

    Within the last decade we have experienced deregulation of several industries, such as airlines, telecommunications and the electric utility industry, the last-mentioned being the focus of this work. Both the telecommunications and the electricity sector depend on network facilities, some of which are still considered natural monopolies. In these industries, open network access is regarded as crucial in order to achieve the gains from increased competition, and transmission tariffs are important in implementing this. Based on the Energy Act that was introduced in 1991, Norway was among the first countries to restructure its electricity sector. On the supply side there are a large number of competing firms, almost exclusively hydro plants, with a combined capacity of about 23000 MW, producing 105-125 TWh per year, depending on the availability of water. Hydro plants are characterized by low variable costs of operation; however, since water may be stored in dams, water has an opportunity cost, generally known as the water value, which is the shadow price of water when solving the generator's intertemporal profit maximization problem. Water values are the main factor of the producers' short-run marginal cost. Total consumption amounts to 112-117 TWh a year, and consumers, even households, may choose their electricity supplier independently of the local distributor to which the customer is connected. In fact, approximately 10% of the households have actually changed supplier. The web-site www.konkurransetilsynet.no indicates available contracts, and www.dinside.no provides an 'energy-calculator' where one can check whether it is profitable to switch supplier. If a customer buys energy from a remote supplier, the local distributor only provides transportation facilities for the energy and is compensated accordingly. Transmission and distribution have remained monopolized and regulated by the Norwegian Water Resources and Energy

  2. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet a turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
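
    For intuition about the staggered-grid velocity-pressure scheme, here is a one-dimensional toy version (illustrative Python only; the production solver is 3D, runs in parallel on Shaheen II, and the grid size, velocity model, and source wavelet below are made up):

```python
# 1D staggered-grid finite-difference acoustics: pressure on integer grid
# points, particle velocity on half points, leapfrogged in time.
import numpy as np

nx, dx, dt, nt = 1000, 10.0, 1e-3, 1500   # grid points, spacing (m), step (s), steps
c = np.full(nx, 2000.0)                    # P-wave velocity model (m/s)
rho = 1000.0                               # constant density (kg/m^3)
kappa = rho * c**2                         # bulk modulus
# CFL check: c*dt/dx = 0.2 < 1, so the explicit scheme is stable here.

p = np.zeros(nx)                           # pressure at integer points
v = np.zeros(nx - 1)                       # velocity at half points
src, f0, t0 = 500, 15.0, 0.1               # source index, peak frequency (Hz), delay (s)

for it in range(nt):
    # Leapfrog: update v at t+dt/2 from dp/dx, then p at t+dt from dv/dx.
    v += (dt / (rho * dx)) * (p[1:] - p[:-1])
    p[1:-1] += (dt * kappa[1:-1] / dx) * (v[1:] - v[:-1])
    # Inject a Ricker wavelet into the pressure field at the source location.
    a = (np.pi * f0 * (it * dt - t0)) ** 2
    p[src] += (1 - 2 * a) * np.exp(-a)

print(f"max |p| after {nt} steps: {np.abs(p).max():.3e}")
```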

  3. Hurricane Modeling and Supercomputing: Can a global mesoscale model be useful in improving forecasts of tropical cyclogenesis?

    Science.gov (United States)

    Shen, B.; Tao, W.; Atlas, R.

    2007-12-01

    Hurricane modeling, along with guidance from observations, has been used to help construct hurricane theories since the 1960s. CISK (conditional instability of the second kind; Charney and Eliassen 1964; Ooyama 1964, 1969) and WISHE (wind-induced surface heat exchange; Emanuel 1986) are among the well-known theories being used to understand hurricane intensification. For hurricane genesis, observations have indicated the importance of large-scale flows (e.g., the Madden-Julian Oscillation or MJO; Maloney and Hartmann, 2000) in the modulation of hurricane activity. Recent modeling studies have focused on the role of the MJO and Rossby waves (e.g., Ferreira and Schubert, 1996; Aiyyer and Molinari, 2003) and/or the interaction of small-scale vortices (e.g., Holland 1995; Simpson et al. 1997; Hendricks et al. 2004), whose determinism could also be set by large-scale flows. The aforementioned studies suggest a unified view of hurricane formation, consisting of multiscale processes such as scale transition (e.g., from the MJO to equatorial Rossby waves and from waves to vortices) and scale interactions among vortices, convection, and surface heat and moisture fluxes. To depict these processes in the unified view, a high-resolution global model is needed. During the past several years, supercomputers have enabled the deployment of ultra-high resolution global models, obtaining remarkable forecasts of hurricane track and intensity (Atlas et al. 2005; Shen et al. 2006). In this work, hurricane genesis is investigated with the aid of a global mesoscale model on the NASA Columbia supercomputer by conducting numerical experiments on the genesis of six consecutive tropical cyclones (TCs) in May 2002. These TCs include two pairs of twin TCs in the Indian Ocean, Supertyphoon Hagibis in the West Pacific Ocean and Hurricane Alma in the East Pacific Ocean. It is found that the model is capable of predicting the genesis of five of these TCs about two to three days in advance. Our

  4. Topical cidofovir for plantar warts.

    Science.gov (United States)

    Padilla España, Laura; Del Boz, Javier; Fernández Morano, Teresa; Arenas Villafranca, Javier; de Troya Martín, Magdalena

    2014-01-01

    Plantar warts are a common reason for dermatological consultations and their treatment can occasionally be a challenge. Plantar warts are benign lesions produced by the human papillomavirus (HPV) that often fail to respond to habitual treatment. Cidofovir is a potent antiviral drug that acts competitively, inhibiting viral DNA polymerase. Our aim was to assess the efficacy and safety of cidofovir cream for the treatment of viral plantar warts. We undertook a retrospective observational study of patients with plantar warts who received treatment with topical cidofovir between July 2008 and July 2011 at the Dermatology Service of the Hospital Costa del Sol, Marbella, Spain. Data about the rate of treatment response, the adverse effects, and recurrences, as well as the characteristics of the patient cohort, were recorded. We identified 35 patients who had received some previous treatment. The usual concentration was 3% (in 33 of 35 cases), applied twice a day (in 31 of 35 cases). A greater or lesser response was noted in 28 cases. There were two recurrences. Topical cidofovir seems to be a useful alternative for the therapeutic management of recalcitrant plantar common warts that fail to respond to usual treatment.

  5. Human-competitive automatic topic indexing

    CERN Document Server

    Medelyan, Olena

    2009-01-01

    Topic indexing is the task of identifying the main topics covered by a document. These are useful for many purposes: as subject headings in libraries, as keywords in academic publications and as tags on the web. Knowing a document’s topics helps people judge its relevance quickly. However, assigning topics manually is labor intensive. This thesis shows how to generate them automatically in a way that competes with human performance. Three kinds of indexing are investigated: term assignment, a task commonly performed by librarians, who select topics from a controlled vocabulary; tagging, a popular activity of web users, who choose topics freely; and a new method of keyphrase extraction, where topics are equated to Wikipedia article names. A general two-stage algorithm is introduced that first selects candidate topics and then ranks them by significance based on their properties. These properties draw on statistical, semantic, domain-specific and encyclopedic knowledge. They are combined using a machine learn...
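
    The two-stage "candidate selection, then ranking" algorithm described above can be caricatured in a few lines (toy code: candidates are word n-grams and the ranking is plain TF-IDF, whereas the thesis learns to combine statistical, semantic, domain-specific and encyclopedic features):

```python
# Stage 1: extract candidate topics; stage 2: rank them by significance.
import math
import re
from collections import Counter

def candidates(text, max_len=3):
    """Stage 1: all word n-grams up to max_len as candidate topics."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(" ".join(words[i:i + n])
                   for n in range(1, max_len + 1)
                   for i in range(len(words) - n + 1))

def rank(doc, corpus, top=5):
    """Stage 2: score candidates by TF-IDF against a background corpus."""
    cand = candidates(doc)
    docs = [candidates(d) for d in corpus]
    def idf(phrase):
        df = sum(1 for d in docs if phrase in d)
        return math.log((1 + len(docs)) / (1 + df))
    scored = {ph: tf * idf(ph) for ph, tf in cand.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top]

corpus = ["the cat sat on the mat", "dogs and cats are pets", "stock markets fell"]
print(rank("topic indexing assigns topic labels to documents", corpus))
```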

  9. Topical Pain Relievers May Cause Burns

    Science.gov (United States)

    ... rare, have ranged from mild to severe chemical burns with use of such brand-name topical muscle ...

  10. Topic prominence in Chinese EFL learners’ interlanguage

    Directory of Open Access Journals (Sweden)

    Shaopeng Li

    2014-01-01

    Full Text Available The present study aims to investigate the general characteristics of topic-prominent typological interlanguage development among Chinese learners of English in terms of acquiring subject-prominent English structures from a discourse perspective. Topic structures mainly appear in Chinese discourse in the form of topic chains (Wang, 2002, 2004). The research targets are the topic chain, which is the main topic-prominent structure in Chinese discourse, and zero anaphora, which is the most common topic anaphora in the topic chain. Two important findings emerged from the present study. First, the characteristics of Chinese topic chains are transferrable to the interlanguage of Chinese EFL learners, thus resulting in overgeneralization of the zero anaphora. Second, the interlanguage discourse of Chinese EFL learners reflects a change of the second language acquisition process from topic-prominence to subject-prominence, thus lending support to the discourse transfer hypothesis.

  11. Topical Conference on Opportunities in Biology for Physicists II

    Energy Technology Data Exchange (ETDEWEB)

    Franz, Judy R.

    2004-02-01

    In 2002, the American Physical Society (APS) organized and held the first topical conference in Boston, MA, as a way of informing physicists, particularly those just entering the field, of opportunities emerging at the interface of physics and biology. Because of the tremendous success of the first conference, it was decided to organize a second conference, similar in nature and focus, but with different presentation topic areas. Again the intended audience would be graduate students and postdocs considering applying methods of physics to biological research, and those who advise others on such opportunities.

  12. Optical systems for synchrotron radiation. Lecture 1. Introductory topics. Revision

    Energy Technology Data Exchange (ETDEWEB)

    Howells, M.R.

    1986-02-01

    Various fundamental topics are considered which underlie the design and use of optical systems for synchrotron radiation. The point of view of linear system theory is chosen which acts as a unifying concept throughout the series. In this context the important optical quantities usually appear as either impulse response functions (Green's functions) or frequency transfer functions (Fourier Transforms of the Green's functions). Topics include the damped harmonic oscillator, free-space optical field propagation, optical properties of materials, dispersion, and the Kramers-Kronig relations.

  13. An efficient highly parallel implementation of a large air pollution model on an IBM blue gene supercomputer

    Science.gov (United States)

    Ostromsky, Tz.; Georgiev, K.; Zlatev, Z.

    2012-10-01

    In this paper we discuss the efficient distributed-memory parallelization strategy of the Unified Danish Eulerian Model (UNI-DEM). We apply an improved decomposition strategy to the spatial domain in order to get more parallel tasks (based on the larger number of subdomains) with less communication between them (due to optimization of the overlapping area when the advection-diffusion problem is solved numerically). This kind of rectangular block partitioning (with a square-shape trend) allows us not only to increase significantly the number of potential parallel tasks, but also to reduce the local memory requirements per task, which is critical for the distributed-memory implementation of the higher-resolution/finer-grid versions of UNI-DEM on some parallel systems, and particularly on the IBM BlueGene/P platform - our target hardware. We show by experiments that our new parallel implementation can use the resources of the powerful IBM BlueGene/P supercomputer, the largest in Bulgaria, rather efficiently, up to its full capacity. It turned out to be extremely useful in the large and computationally expensive numerical experiments carried out to calculate some initial data for sensitivity analysis of the Danish Eulerian model.
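
    The decomposition strategy can be illustrated with a short sketch: pick a near-square process grid, then give each task a rectangular block extended by halo cells for the advection-diffusion overlap. The grid dimensions and halo width below are illustrative, not UNI-DEM's actual configuration.

```python
# Sketch of rectangular block partitioning with a square-shape trend and a
# halo overlap; not the actual UNI-DEM decomposition code.
from math import sqrt

def block_grid(ntasks):
    """Choose a process grid (px, py) as close to square as possible."""
    px = int(sqrt(ntasks))
    while ntasks % px:
        px -= 1
    return px, ntasks // px

def subdomain(rank, nx, ny, px, py, halo=1):
    """Index range (with halo) owned by `rank` on an nx-by-ny grid."""
    ix, iy = rank % px, rank // px
    x0, x1 = ix * nx // px, (ix + 1) * nx // px
    y0, y1 = iy * ny // py, (iy + 1) * ny // py
    return (max(x0 - halo, 0), min(x1 + halo, nx),
            max(y0 - halo, 0), min(y1 + halo, ny))

px, py = block_grid(64)                      # e.g. an 8 x 8 block grid
print(px, py, subdomain(0, 480, 480, px, py))
```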

  14. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM² Distance computing and communication team activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  15. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
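
    The screening step mentioned above is conceptually simple; the expense lies in generating the data. A toy sketch of filtering a table of DFT-derived properties follows; the column names, compounds, and cutoffs are assumptions for illustration, not the Materials Project schema or the AMX2 screening criteria.

```python
# Illustrative data-driven screening over (synthetic) DFT results.
import pandas as pd

df = pd.DataFrame({
    "formula":      ["NaAlS2", "KGaTe2", "AgInSe2", "CuFeS2"],
    "band_gap_eV":  [1.9, 0.7, 1.2, 0.5],
    "e_above_hull": [0.00, 0.03, 0.01, 0.12],  # stability proxy (eV/atom)
})

# Keep thermodynamically stable compounds with a moderate band gap.
hits = df[(df.e_above_hull <= 0.05) & df.band_gap_eV.between(0.5, 1.5)]
print(hits.formula.tolist())
```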

  16. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
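
    The hybrid pattern itself is easy to sketch. The real SP and BT benchmarks are Fortran codes with OpenMP regions inside MPI ranks; in the Python stand-in below (using the mpi4py library), a thread pool over NumPy slabs plays the role of the OpenMP parallel region and an MPI reduction represents the inter-node communication step.

```python
# Hybrid pattern sketch: MPI between nodes, threads within a node.
# Run with e.g.: mpiexec -n 4 python hybrid.py
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
local = np.random.rand(1024, 1024)        # this rank's share of the grid

def smooth(slab):
    """Per-thread compute kernel (NumPy ufuncs release the GIL)."""
    return 0.25 * (slab + np.roll(slab, 1, 0)
                   + np.roll(slab, -1, 0) + np.roll(slab, 1, 1))

with ThreadPoolExecutor(max_workers=4) as pool:        # "OpenMP" region
    local = np.vstack(list(pool.map(smooth, np.array_split(local, 4))))

total = comm.allreduce(local.sum(), op=MPI.SUM)        # MPI step
if rank == 0:
    print(f"{size} ranks, global sum {total:.3f}")
```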

  17. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
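
    While the full (3+1)-dimensional problem coupled to ionization demands supercomputing, the core numerical idea can be shown on a toy (1+1)-dimensional nonlinear Schrödinger equation integrated by the split-step Fourier method. All parameters below are illustrative, and the ionization term is omitted entirely.

```python
# Toy (1+1)-D split-step Fourier solver for
# dA/dz = -i (beta2/2) d^2A/dt^2 + i gamma |A|^2 A   (Kerr term only).
import numpy as np

nt, dt = 1024, 0.01
beta2, gamma, dz, steps = -1.0, 1.0, 1e-3, 500
t = (np.arange(nt) - nt // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(nt, dt)

A = np.exp(-t**2)                                  # Gaussian input pulse
half_disp = np.exp(0.25j * beta2 * w**2 * dz)      # half-step dispersion

for _ in range(steps):                             # symmetric split-step loop
    A = np.fft.ifft(half_disp * np.fft.fft(A))
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)    # Kerr nonlinearity
    A = np.fft.ifft(half_disp * np.fft.fft(A))

print("peak intensity after propagation:", np.abs(A).max()**2)
```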

  18. Prioritizing dermatoses: rationally selecting guideline topics.

    Science.gov (United States)

    Borgonjen, R J; van Everdingen, J J E; van de Kerkhof, P C M; Spuls, Ph I

    2015-08-01

    Clinical practice guideline (CPG) development starts with selecting appropriate topics, as resources to develop a guideline are limited. However, a standardized method for topic selection is commonly missing, and the way different criteria are used to prioritize is not clear. To select and prioritize dermatological topics for CPG development and elucidate criteria dermatologists find important in selecting guideline topics, all 410 dermatologists in the Netherlands were asked to create a top 20 of dermatological topics for which a guideline would be desirable, regardless of existing guidelines. They also rated, on a 5-point Likert scale, 10 determinative criteria derived from a combined search in the literature and across (inter)national guideline developers. Top 20 topics received scores ranging from 0.01 to 0.2, and combined scores yielded a total score. The 118 surveys (response 29%) identified 157 different topics. Melanoma, squamous cell carcinoma, basal cell carcinoma, psoriasis and atopic dermatitis are top-priority guideline topics. Venous leg ulcer, vasculitis, varicose veins, urticaria, acne, Lyme borreliosis, cutaneous lupus erythematosus, pruritus, syphilis, lymphoedema, decubitus ulcer, hidradenitis suppurativa, androgenic alopecia and bullous pemphigoid complete the top 20. A further 15 topics have overlapping confidence intervals. Mortality and healthcare costs are regarded as less important criteria in the selection of guideline topics (P < 0.05). Respondents mostly agree with (inter)national guideline programmes and literature concerning the criteria important to selecting guideline topics. © 2014 European Academy of Dermatology and Venereology.

  19. Topics in Open Topological Strings

    CERN Document Server

    Prudenziati, Andrea

    2010-01-01

    This thesis is based on selected topics in open topological string theory on which I worked during my Ph.D. It comprises an introductory part focused on the points most needed for the later chapters, trading completeness for conciseness and clarity. Then, following [12], we discuss tadpole cancellation for topological strings, where we mainly show how its implementation is needed to ensure the same "odd" moduli decoupling encountered in the closed theory. Next we analyse how the open and closed effective field theories for the B model interact, writing the complete Lagrangian. We first check it by deriving some already known tree-level amplitudes in terms of target space quantities, and then extend the recipe to new results; later we implement open-closed duality from a target field theory perspective. This last subject is also analysed from a worldsheet point of view, extending the analysis of [13]. Some ideas for future research are briefly reported.

  20. Synergetics introduction and advanced topics

    CERN Document Server

    Haken, Hermann

    2004-01-01

    This book is an often-requested reprint of two classic texts by H. Haken: "Synergetics. An Introduction" and "Advanced Synergetics". Synergetics, an interdisciplinary research program initiated by H. Haken in 1969, deals with the systematic and methodological approach to the rapidly growing field of complexity. Going well beyond qualitative analogies between complex systems in fields as diverse as physics, chemistry, biology, sociology and economics, Synergetics uses tools from theoretical physics and mathematics to construct a unifying framework within which quantitative descriptions of complex, self-organizing systems can be made. This may well explain the timelessness of H. Haken's original texts on this topic, which are now recognized as landmarks in the field of complex systems. They provide both the beginning graduate student and the seasoned researcher with solid knowledge of the basic concepts and mathematical tools. Moreover, they admirably convey the spirit of the pioneering work by the founder of ...

  1. Topics in Banach space theory

    CERN Document Server

    Albiac, Fernando

    2016-01-01

    This text provides the reader with the necessary technical tools and background to reach the frontiers of research without the introduction of too many extraneous concepts. Detailed and accessible proofs are included, as are a variety of exercises and problems. The two new chapters in this second edition are devoted to two topics of much current interest amongst functional analysts: Greedy approximation with respect to bases in Banach spaces and nonlinear geometry of Banach spaces. This new material is intended to present these two directions of research for their intrinsic importance within Banach space theory, and to motivate graduate students interested in learning more about them. This textbook assumes only a basic knowledge of functional analysis, giving the reader a self-contained overview of the ideas and techniques in the development of modern Banach space theory. Special emphasis is placed on the study of the classical Lebesgue spaces Lp (and their sequence space analogues) and spaces of continuous f...

  2. Neutron transport simulation (selected topics)

    Energy Technology Data Exchange (ETDEWEB)

    Vaz, P. [Instituto Tecnologico e Nuclear, Estrada Nacional 10, P-2686-953 Sacavem (Portugal)], E-mail: pedrovaz@itn.pt

    2009-10-15

    Neutron transport simulation is usually performed for criticality, power distribution, activation, scattering, dosimetry and shielding problems, among others. During the last fifteen years, innovative technological applications have been proposed (Accelerator Driven Systems, Energy Amplifiers, Spallation Neutron Sources, etc.), involving the utilization of intermediate-energy (hundreds of MeV), high-intensity (tens of mA) proton accelerators impinging on targets of high-Z elements. Additionally, the use of protons, neutrons and light ions for medical applications (hadrontherapy) imposes requirements on neutron dosimetry-related quantities (such as kerma factors) for biologically relevant materials, in the energy range starting at several tens of MeV. Shielding- and activation-related problems associated with the operation of high-energy proton accelerators, emerging space-related applications and aircrew dosimetry-related topics are also fields of intense activity, requiring medium- and high-energy neutron (and other hadron) transport simulation that is as accurate as possible. These applications impose specific requirements on cross-section data for structural materials, targets, actinides and biologically relevant materials. Emerging nuclear energy systems and next-generation nuclear reactors also impose requirements on accurate neutron transport calculations and on cross-section data for structural materials, coolants and nuclear fuel materials, aiming at improved safety and detailed thermal-hydraulics and radiation damage studies. In this review paper, the state of the art in the computational tools and methodologies available to perform neutron transport simulation is presented. Proton- and neutron-induced cross-section data needs and requirements are discussed. Hot topics are pinpointed, prospective views are provided and future trends identified.

  3. Neutron transport simulation (selected topics)

    Science.gov (United States)

    Vaz, P.

    2009-10-01

    Neutron transport simulation is usually performed for criticality, power distribution, activation, scattering, dosimetry and shielding problems, among others. During the last fifteen years, innovative technological applications have been proposed (Accelerator Driven Systems, Energy Amplifiers, Spallation Neutron Sources, etc.), involving the utilization of intermediate-energy (hundreds of MeV), high-intensity (tens of mA) proton accelerators impinging on targets of high-Z elements. Additionally, the use of protons, neutrons and light ions for medical applications (hadrontherapy) imposes requirements on neutron dosimetry-related quantities (such as kerma factors) for biologically relevant materials, in the energy range starting at several tens of MeV. Shielding- and activation-related problems associated with the operation of high-energy proton accelerators, emerging space-related applications and aircrew dosimetry-related topics are also fields of intense activity, requiring medium- and high-energy neutron (and other hadron) transport simulation that is as accurate as possible. These applications impose specific requirements on cross-section data for structural materials, targets, actinides and biologically relevant materials. Emerging nuclear energy systems and next-generation nuclear reactors also impose requirements on accurate neutron transport calculations and on cross-section data for structural materials, coolants and nuclear fuel materials, aiming at improved safety and detailed thermal-hydraulics and radiation damage studies. In this review paper, the state of the art in the computational tools and methodologies available to perform neutron transport simulation is presented. Proton- and neutron-induced cross-section data needs and requirements are discussed. Hot topics are pinpointed, prospective views are provided and future trends identified.

  4. What would a data scientist do with 10 seconds on a supercomputer?

    Science.gov (United States)

    Nychka, D. W.

    2014-12-01

    The statistical problems of large climate datasets, the flexibility of high-level data languages such as R, and the architectures of current supercomputers have motivated a different paradigm for data analysis problems that are amenable to being parallelized. Part of the switch in thinking is to harness many cores for a short amount of time to produce interactive-like exploratory data analysis for the space-time data sets typically encountered in the geosciences. As motivation we consider the near-interactive analysis of daily observed temperature and rainfall fields for North America over the past 30 years. For certain kinds of analysis the potential is for speedups on the order of a factor of 1000 or more, and so changes traditional workflows of statistical modeling and inference for large geophysical datasets.
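
    The pattern the abstract has in mind (the original analysis is in R) is embarrassingly parallel: one small statistical fit per grid cell, fanned out over many cores for a few seconds. A Python sketch of the same idea, on synthetic data, is shown below.

```python
# "Many cores for a few seconds": fit a trend per grid cell in parallel.
from multiprocessing import Pool
import numpy as np

years = np.arange(30)                    # 30 years of observations
fields = np.random.rand(30, 100, 100)    # synthetic daily-mean temperature fields

def cell_trend(ij):
    """Least-squares slope of the 30-year series at one grid cell."""
    i, j = ij
    return np.polyfit(years, fields[:, i, j], 1)[0]

if __name__ == "__main__":
    cells = [(i, j) for i in range(100) for j in range(100)]
    with Pool() as pool:                 # one task per grid cell
        trends = pool.map(cell_trend, cells)
    print("max trend over the domain:", max(trends))
```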

  5. 27-Gauge Vitrectomy for Symptomatic Vitreous Floaters with Topical Anesthesia

    Science.gov (United States)

    Lin, Zhong; Moonasar, Nived; Wu, Rong Han; Seemongal-Dass, Robin R.

    2017-01-01

    Purpose Traditionally acceptable methods of anesthesia for vitrectomy surgery are quite varied. However, each of these methods has its own potential for complications that can range from minor to severe. The surgical procedure of vitrectomy for symptomatic vitreous floaters is much simpler, mainly reflected in the nonuse of scleral indentation and photocoagulation and in the apparently short surgery duration. The use of 27-gauge cannulae makes the puncture of the sclera minimally invasive. Hence, retrobulbar anesthesia, with its rare but severe complications, seemed excessive for this kind of surgery. Method Three cases of 27-gauge, sutureless pars plana vitrectomy for symptomatic vitreous floaters under topical anesthesia are reported. Results The vitrectomy surgeries were successfully performed under topical anesthesia (proparacaine, 0.5%) without operative or postoperative complications. Furthermore, none of the patients experienced apparent pain during or after the surgery. Conclusion Topical anesthesia can be considered for 27-gauge vitrectomy in patients with symptomatic vitreous floaters. PMID:28203195

  6. Multilingual Topic Models for Unaligned Text

    CERN Document Server

    Boyd-Graber, Jordan

    2012-01-01

    We develop the multilingual topic model for unaligned text (MuTo), a probabilistic model of text that is designed to analyze corpora composed of documents in two languages. From these documents, MuTo uses stochastic EM to simultaneously discover both a matching between the languages and multilingual latent topics. We demonstrate that MuTo is able to find shared topics on real-world multilingual corpora, successfully pairing related documents across languages. MuTo provides a new framework for creating multilingual topic models without needing carefully curated parallel corpora and allows applications built using the topic model formalism to be applied to a much wider class of corpora.

  7. Topical treatments for cutaneous warts.

    Science.gov (United States)

    Kwok, Chun Shing; Gibbs, Sam; Bennett, Cathy; Holland, Richard; Abbott, Rachel

    2012-09-12

    Viral warts are a common skin condition, which can range in severity from a minor nuisance that resolves spontaneously to a troublesome, chronic condition. Many different topical treatments are available. To evaluate the efficacy of local treatments for cutaneous non-genital warts in healthy, immunocompetent adults and children. We updated our searches of the following databases to May 2011: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library, MEDLINE (from 2005), EMBASE (from 2010), AMED (from 1985), LILACS (from 1982), and CINAHL (from 1981). We searched reference lists of articles and online trials registries for ongoing trials. Randomised controlled trials (RCTs) of topical treatments for cutaneous non-genital warts. Two authors independently selected trials and extracted data; a third author resolved any disagreements. We included 85 trials involving a total of 8815 randomised participants (26 new studies were included in this update). There was a wide range of different treatments and a variety of trial designs. Many of the studies were judged to be at high risk of bias in one or more areas of trial design. Trials of salicylic acid (SA) versus placebo showed that the former significantly increased the chance of clearance of warts at all sites (RR (risk ratio) 1.56, 95% CI (confidence interval) 1.20 to 2.03). Subgroup analysis for different sites, hands (RR 2.67, 95% CI 1.43 to 5.01) and feet (RR 1.29, 95% CI 1.07 to 1.55), suggested it might be more effective for hands than feet. A meta-analysis of cryotherapy versus placebo for warts at all sites favoured neither intervention nor control (RR 1.45, 95% CI 0.65 to 3.23). Subgroup analysis for different sites, hands (RR 2.63, 95% CI 0.43 to 15.94) and feet (RR 0.90, 95% CI 0.26 to 3.07), again suggested better outcomes for hands than feet. One trial showed cryotherapy to be better than both placebo and SA, but only for hand warts. There was no significant difference in cure rates between

  8. Chinese and Thai Bilingual Topic Detection Online

    Directory of Open Access Journals (Sweden)

    Rang Ziqiang

    2017-01-01

    Full Text Available Bilingual topic detection is a vital application of natural language processing in the Internet Plus era and the trend of economic globalization. At present, methods of bilingual topic detection cannot solve the problem of inconsistent distribution of bilingual topics. Aiming at this shortcoming, this paper introduces a maximal-clique-based method for detecting bilingual topics from Chinese and Thai feature words. First, keywords are extracted from each Chinese and Thai news document with the TextRank algorithm. Next, word senses are disambiguated by means of similarity measures combined with a Chinese-Thai dictionary. Then, credible association rules are used to cluster Chinese and Thai feature words, generating maximal cliques of bilingual topics. Finally, similar maximal cliques are clustered to obtain the final topic collection. According to the needs of users, the method can recommend bilingual topics of different sizes. Tests on Chinese and Thai news texts from January 2016 achieved good results. From the perspective of cross-language word clustering, the algorithm effectively and reasonably solves the problem of inconsistent bilingual topic distribution, and has the advantages of not needing to estimate the number of topics and of low time complexity, making it suitable for online bilingual topic discovery.
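
    The maximal-clique step can be made concrete with a small sketch: feature words whose association confidence clears a threshold become edges of a graph, and each maximal clique is a bilingual topic candidate. The word pairs and weights below are toy data, and the networkx library stands in for the paper's own clique computation.

```python
# Maximal cliques over a word-association graph as bilingual topic candidates.
import networkx as nx

# (word_a, word_b, confidence) triples from credible association rules
rules = [("flood", "น้ำท่วม", 0.90), ("flood", "rescue", 0.80),
         ("น้ำท่วม", "rescue", 0.70), ("election", "เลือกตั้ง", 0.85),
         ("election", "vote", 0.75)]

G = nx.Graph()
G.add_weighted_edges_from((a, b, c) for a, b, c in rules if c >= 0.70)

for clique in nx.find_cliques(G):    # each maximal clique = topic candidate
    print(sorted(clique))
```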

  9. Prioritising topics for the undergraduate ENT curriculum.

    Science.gov (United States)

    Constable, J D; Moghul, G A; Leighton, P; Schofield, S J; Daniel, M

    2017-07-01

    Knowledge of ENT is important for many doctors, but undergraduate time is limited. This study aimed to identify what is thought about ENT knowledge amongst non-ENT doctors, and the key topics that the curriculum should focus on. Doctors were interviewed about their views of ENT knowledge amongst non-ENT doctors, and asked to identify key topics. These topics were then used to devise a questionnaire, which was distributed to multiple stakeholders in order to identify the key topics. ENT knowledge was generally thought to be poor amongst doctors, and it was recommended that undergraduate ENT topics be kept simple. The highest rated topics were: clinical examination; when to refer; acute otitis media; common emergencies; tonsillitis and quinsy; management of ENT problems by non-ENT doctors; stridor and stertor; otitis externa; and otitis media with effusion. This study identified a number of key ENT topics, and will help to inform future development of ENT curricula.

  10. Prioritizing research topics: a comparison of crowdsourcing and patient registry.

    Science.gov (United States)

    Truitt, Anjali R; Monsell, Sarah E; Avins, Andrew L; Nerenz, David R; Lawrence, Sarah O; Bauer, Zoya; Comstock, Bryan A; Edwards, Todd C; Patrick, Donald L; Jarvik, Jeffrey G; Lavallee, Danielle C

    2017-04-05

    A cornerstone of patient-centered outcome research is direct patient involvement throughout the research process. Identifying and prioritizing research topics is a critical but often overlooked point for involvement, as it guides what research questions are asked. We assess the feasibility of involving individuals with low back pain in identifying and prioritizing research topics using two approaches: an existing patient registry and an online crowdsourcing platform. We compare and contrast the diversity of participants recruited, their responses, and resources involved. Eligible participants completed a survey ranking their five highest priority topics from an existing list and supplying additional topics not previously identified. We analyzed their responses using descriptive statistics and content analysis. The patient registry yielded older (mean age 72.4), mostly White (70%), and well-educated (95% high school diploma or higher) participants; crowdsourcing yielded younger (mean age 36.6 years), mostly White (82%), and well-educated (98% high school diploma or higher) participants. The two approaches resulted in similar research priorities by frequency. Both provided open-ended responses that were useful, in that they illuminate additional and nuanced research topics. Overall, both approaches suggest a preference towards topics related to diagnosis and treatment over other topics. Using a patient registry and crowdsourcing are both feasible recruitment approaches for engagement. Researchers should consider their approach, community, and resources when choosing their recruitment approach, as each approach has its own strengths and weaknesses. These approaches are likely most appropriate to supplement or to complement in-person and ongoing engagement strategies.

  11. Design of a charging model for a supercomputing CAE cloud platform based on a user feedback mechanism

    Institute of Scientific and Technical Information of China (English)

    马亿旿; 池鹏; 陈磊; 梁小林; 蔡立军

    2015-01-01

    The traditional charging model of a CAE cloud platform has many shortcomings: user behavior and feedback are not considered, a single charging mode cannot support differentiated services, and business flexibility is poor. To address these, a plug-in charging model for the supercomputing CAE cloud platform is proposed, along with a charging algorithm based on a user feedback mechanism. The plug-in charging model treats a service as the basic unit and provides different charging schemes for a user's services in the form of plug-ins, which resolves the single-mode and inflexibility defects of the traditional model and strengthens the business dynamism of the supercomputing CAE cloud platform. The charging algorithm dynamically adjusts a user's charging parameters according to the user's historical behavior and feedback, and reduces service costs according to the user's activity and importance, thereby guaranteeing quality of service and improving the user experience.
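
    A minimal sketch of the plug-in idea follows; the class and field names are hypothetical, not the paper's implementation. Each service type gets its own charging plug-in, and the user's activity and importance (derived from history and feedback) discount the bill.

```python
# Hypothetical plug-in charging model with feedback-adjusted parameters.
class ChargingPlugin:
    """One plug-in = one charging scheme for one service type."""
    def __init__(self, rate_per_core_hour):
        self.rate = rate_per_core_hour

    def charge(self, core_hours, user):
        # Feedback mechanism: discount grows with activity and importance.
        discount = min(0.3, 0.01 * user["activity"] + 0.1 * user["importance"])
        return core_hours * self.rate * (1 - discount)

plugins = {"cae_solver": ChargingPlugin(0.12),     # one plug-in per service
           "post_processing": ChargingPlugin(0.05)}

user = {"activity": 8, "importance": 1}            # from history and feedback
print(plugins["cae_solver"].charge(500, user))     # bill one job
```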

  12. Identifying Topics in Microblogs Using Wikipedia.

    Science.gov (United States)

    Yıldırım, Ahmet; Üsküdarlı, Suzan; Özgür, Arzucan

    2016-01-01

    Twitter is an extremely high-volume platform for user-generated contributions regarding any topic. The wealth of content created in real time in massive quantities calls for automated approaches to identify the topics of the contributions. Such topics can be utilized in numerous ways, such as public opinion mining, marketing, entertainment, and disaster management. Towards this end, approaches to relate single or partial posts to knowledge base items have been proposed. However, in microblogging systems like Twitter, topics emerge from the culmination of a large number of contributions. Therefore, identifying topics based on collections of posts, where individual posts contribute to some aspect of the greater topic, is necessary. Models such as Latent Dirichlet Allocation (LDA) propose algorithms for relating collections of posts to sets of keywords that represent underlying topics. In these approaches, figuring out what specific topic(s) the keyword sets represent remains a separate task. Another issue in topic detection is scope, which is often limited to a specific domain, such as health. This work proposes an approach for identifying domain-independent specific topics related to sets of posts. In this approach, individual posts are processed and then aggregated to identify key tokens, which are then mapped to specific topics. Wikipedia article titles are selected to represent topics, since they are up-to-date, user-generated, sophisticated articles that span topics of human interest. This paper describes the proposed approach, a prototype implementation, and a case study based on data gathered during the heavily contributed periods corresponding to the four US election debates in 2012. The manually evaluated results (0.96 precision) and other observations from the study are discussed in detail.
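
    The aggregate-then-map idea can be sketched briefly: pool tokens across many posts, keep the most frequent ones, and match token combinations against an inventory of Wikipedia article titles. The posts and the tiny title list below are toy stand-ins for the 2012 debate data and the full title inventory.

```python
# Aggregate key tokens over posts, then map them to (toy) Wikipedia titles.
from collections import Counter
from itertools import combinations

posts = ["obama and romney debate the economy",
         "watching the presidential debate tonight",
         "romney on taxes in the debate"]
titles = {"united states presidential debates", "mitt romney",
          "barack obama", "economy of the united states"}

tokens = Counter(w for p in posts for w in p.split() if len(w) > 3)
key = [w for w, _ in tokens.most_common(5)]        # key tokens across posts

for r in (2, 1):                                   # prefer multi-token matches
    for combo in combinations(key, r):
        for title in titles:
            if all(w in title for w in combo):
                print(combo, "->", title)
```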

  13. Pentyl Gallate Nanoemulsions as Potential Topical Treatment of Herpes Labialis.

    Science.gov (United States)

    Kelmann, Regina G; Colombo, Mariana; De Araújo Lopes, Sávia Caldeira; Nunes, Ricardo J; Pistore, Morgana; Dall Agnol, Daniele; Rigotto, Caroline; Silva, Izabella Thais; Roman, Silvane S; Teixeira, Helder F; Oliveira Simões, Cláudia M; Koester, Letícia S

    2016-07-01

    Previous studies have demonstrated the antiherpes activity of pentyl gallate (PG), suggesting that it could be a promising candidate for the topical treatment of human herpes labialis. The low aqueous solubility of PG represents a major drawback to its incorporation into topical dosage forms. Hence, the feasibility of incorporating PG into nanoemulsions, and the ability of these to penetrate the skin, to inhibit herpes simplex virus (HSV)-1 replication, and to cause dermal sensitization or toxicity, were evaluated. Oil/water nanoemulsions containing 0.5% PG were prepared by spontaneous emulsification. The in vitro PG distribution into porcine ear skin after topical application of the nanoemulsions was assessed, and the in vitro antiviral activity against HSV-1 replication was evaluated. Acute dermal toxicity and risk of dermal sensitization were evaluated in a rat model. The nanoemulsions presented nanometric particle size (from 124.8 to 143.7 nm), high zeta potential (from -50.1 to -66.1 mV), loading efficiency above 99%, and adequate stability over 12 months. All formulations presented anti-HSV-1 activity. PG reached deeper into the dermis most efficiently from nanoemulsion F4. This formulation, as well as PG itself, was considered safe for topical use. Nanoemulsions seem to be a safe and effective approach for topically delivering PG in the treatment of human herpes labialis infection.

  14. Markers of topical discourse in child-directed speech.

    Science.gov (United States)

    Rohde, Hannah; Frank, Michael C

    2014-01-01

    Although the language we encounter is typically embedded in rich discourse contexts, many existing models of processing focus largely on phenomena that occur sentence-internally. Similarly, most work on children's language learning does not consider how information can accumulate as a discourse progresses. Research in pragmatics, however, points to ways in which each subsequent utterance provides new opportunities for listeners to infer speaker meaning. Such inferences allow the listener to build up a representation of the speakers' intended topic and more generally to identify relationships, structures, and messages that extend across multiple utterances. We address this issue by analyzing a video corpus of child-caregiver interactions. We use topic continuity as an index of discourse structure, examining how caregivers introduce and discuss objects across utterances. For the analysis, utterances are grouped into topical discourse sequences using three annotation strategies: raw annotations of speakers' referents, the output of a model that groups utterances based on those annotations, and the judgments of human coders. We analyze how the lexical, syntactic, and social properties of caregiver-child interaction change over the course of a sequence of topically related utterances. Our findings suggest that many cues used to signal topicality in adult discourse are also available in child-directed speech.

  15. Topics in Number Theory Conference

    CERN Document Server

    Andrews, George; Ono, Ken

    1999-01-01

    From July 31 through August 3, 1997, the Pennsylvania State University hosted the Topics in Number Theory Conference. The conference was organized by Ken Ono and myself. By writing the preface, I am afforded the opportunity to express my gratitude to Ken for being the inspiring and driving force behind the whole conference. Without his energy, enthusiasm and skill the entire event would never have occurred. We are extremely grateful to the sponsors of the conference: The National Science Foundation, The Penn State Conference Center and the Penn State Department of Mathematics. The objective of this conference was to provide a variety of presentations giving a current picture of recent, significant work in number theory. There were eight plenary lectures: H. Darmon (McGill University), "Non-vanishing of L-functions and their derivatives modulo p." A. Granville (University of Georgia), "Mean values of multiplicative functions." C. Pomerance (University of Georgia), "Recent results in primality testing." C. ...

  16. Decision Point 1 Topical Report

    Energy Technology Data Exchange (ETDEWEB)

    Yablonsky, Al; Barsoumian, Shant; Legere, David

    2013-05-01

    This Topical Report addresses accomplishments achieved during Phase 2a of the SkyMine® Carbon Mineralization Pilot Project. The primary objectives of this project are to design, construct, and operate a system to capture CO2 from a slipstream of flue gas from a commercial coal-fired cement kiln, convert that CO2 to products having commercial value (i.e., beneficial use), show the economic viability of the CO2 capture and conversion process, and thereby advance the technology to the point of readiness for commercial scale demonstration and proliferation. The overall process is carbon negative, resulting in mineralization of CO2 that would otherwise be released into the atmosphere. The project will also substantiate market opportunities for the technology by sales of chemicals into existing markets, and identify opportunities to improve technology performance and reduce costs at the commercial scale. The project is being conducted in two phases. The primary objectives of Phase 1 were to elaborate proven SkyMine® process chemistry to commercial pilot-scale operation and complete the preliminary design for the pilot plant to be built and operated in Phase 2, complete a NEPA evaluation, and develop a comprehensive carbon life cycle analysis. The objective of the current Phase (2a) is to complete the detailed design of the pilot plant to be built in Phase 2b.

  17. Topics in combinatorial pattern matching

    DEFF Research Database (Denmark)

    Vildhøj, Hjalte Wedel

    Problem. Given m documents of total length n, we consider the problem of finding a longest string common to at least d ≥ 2 of the documents. This problem is known as the longest common substring (LCS) problem and has a classic O(n) space and O(n) time solution (Weiner [FOCS’73], Hui [CPM’92]). However...

  18. Recent Advances In Topical Therapy In Dermatology

    Directory of Open Access Journals (Sweden)

    Mohan Thappa Devinder

    2003-01-01

    Full Text Available With changing times, various newer topical agents have been introduced in the field of dermatology. Tacrolimus and pimecrolimus are immunosuppressants that are effective topically and have been tried in the management of atopic dermatitis as well as other disorders including allergic contact dermatitis, atrophic lichen planus and pyoderma gangrenosum. Imiquimod, an immune response modifier, is presently in use for genital warts but has potential as an anti-tumour agent and in various other dermatological conditions when used topically. Tazarotene is a newer addition to the list of topical retinoids; it is effective in psoriasis and has better effect in combination with calcipotriene, phototherapy and topical corticosteroids. Tazarotene and adapalene are also effective in inflammatory acne. Calcipotriol, a vitamin D analogue, has been introduced as a topical agent in the treatment of psoriasis. Steroid compounds are also being developed that will be devoid of side effects while retaining adequate anti-inflammatory effect. Topical photodynamic therapy also has a wide range of uses in dermatology. Newer topical agents including cidofovir, capsaicin, topical sensitizers and topical antifungal agents for onychomycosis are also of use in clinical practice. Other promising developments include skin substitutes and growth factors for wound care.

  19. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world-record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
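
    The likelihood-fitting step can be illustrated with a toy example: fit parameters of a risk function to observed incident rates with a Levenberg-Marquardt least-squares routine under an L2 norm. The exponential risk form and the numbers are illustrative, not the RGBM risk functions or LANL Data Bank values.

```python
# Toy Levenberg-Marquardt fit of a risk function to incident-rate data.
import numpy as np
from scipy.optimize import least_squares

exposure = np.array([0.5, 1.0, 1.5, 2.0, 2.5])            # e.g. supersaturation
observed = np.array([0.0002, 0.001, 0.004, 0.012, 0.03])  # incident rates

def residuals(p):
    a, b = p
    return a * (np.exp(b * exposure) - 1) - observed      # model minus data

fit = least_squares(residuals, x0=[1e-3, 1.0], method="lm")
print("fitted parameters:", fit.x)
```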

  20. Propylene glycol: an often unrecognized cause of allergic contact dermatitis in patients using topical corticosteroids.

    Science.gov (United States)

    Al Jasser, M; Mebuke, N; de Gannes, G C

    2011-05-01

    Propylene glycol (PG) is considered to be a ubiquitous formulary ingredient used in many personal care products and pharmaceutical preparations. It is an organic compound commonly found in topical corticosteroids (CS). Cutaneous reactions to PG are mostly irritant, but allergic contact dermatitis to PG is well-documented. Cosensitization to PG and topical CS can occur, making it challenging to choose the appropriate topical CS in a PG-allergic patient. This review is aimed at guiding clinicians in the selection of a suitable topical corticosteroid when presented with patients allergic to PG.

  1. Topical ocular delivery of fluoroquinolones.

    Science.gov (United States)

    Pawar, Pravin; Katara, Rajesh; Mishra, Sushil; Majumdar, Dipak K

    2013-05-01

    Topical fluoroquinolones are used in ophthalmology to treat ocular infections. They are bactericidal and inhibit bacterial DNA replication by inhibiting DNA gyrase and topoisomerase. Fluoroquinolones possess two ionizable groups: a carboxylic group (pKa1 = 5.5 - 6.34) and a heterocyclic group (pKa2 = 7.6 - 9.3), in the nucleus, which acquire charge at pH above and below the isoelectric point (pI = 6.75 - 7.78). At isoelectric point, fluoroquinolones remain unionized and show enhanced corneal penetration but exhibit reduced aqueous solubility and the drug may precipitate from aqueous solution. Aqueous ophthalmic solutions of fluoroquinolones are obtained by using hydrochloride or mesylate salt which is acidic and irritating to the eyes. Hence, pH of the solution is kept between 5 and 7 to ensure aqueous solubility and minimum ocular irritation. This review gives an overview of various physicochemical and formulation factors affecting the ocular delivery of fluoroquinolones and strategies for getting higher ocular bioavailability for ocular delivery of fluoroquinolones. These strategies could be employed to improve efficacy of fluoroquinolones in eye preparation. Broad-spectrum antibacterials, such as the ophthalmic fluoroquinolones, are powerful weapons for treating and preventing potentially sight-threatening infections. The fourth-generation fluoroquinolones have quickly assumed an outstanding place in the ophthalmic applications. Especially valuable for their broad-spectrum coverage against Gram-positive and Gram-negative organisms, these agents have become the anti-infective of preference for many ophthalmologists. Moxifloxacin seems to be a promising powerful molecule among all fluoroquinolones for treatment of bacterial infections.

  2. Topical ocular delivery of NSAIDs.

    Science.gov (United States)

    Ahuja, Munish; Dhake, Avinash S; Sharma, Surendra K; Majumdar, Dipak K

    2008-06-01

    In ocular tissue, arachidonic acid is metabolized by cyclooxygenase to prostaglandins which are the most important lipid derived mediators of inflammation. Presently nonsteroidal anti-inflammatory drugs (NSAIDs) which are cyclooxygenase (COX) inhibitors are being used for the treatment of inflammatory disorders. NSAIDs used in ophthalmology, topically, are salicylic-, indole acetic-, aryl acetic-, aryl propionic- and enolic acid derivatives. NSAIDs are weak acids with pKa mostly between 3.5 and 4.5, and are poorly soluble in water. Aqueous ophthalmic solutions of NSAIDs have been made using sodium, potassium, tromethamine and lysine salts or complexing with cyclodextrins/solubilizer. Ocular penetration of NSAID demands an acidic ophthalmic solution where cyclodextrin could prevent precipitation of drug and minimize its ocular irritation potential. The incompatibility of NSAID with benzalkonium chloride is avoided by using polysorbate 80, cyclodextrins or tromethamine. Lysine salts and alpha-tocopheryl polyethylene glycol succinate disrupt corneal integrity, and their use requires caution. Thus a nonirritating ophthalmic solution of NSAID could be formulated by dissolving an appropriate water-soluble salt, in the presence of cyclodextrin or tromethamine (if needed) in mildly acidified purified water (if stability permits) with or without benzalkonium chloride and polyvinyl alcohol. Amide prodrugs met with mixed success due to incomplete intraocular hydrolysis. Suspension and ocular inserts appear irritating to the inflamed eye. Oil drop may be a suitable option for insoluble drugs and ointment may be used for sustained effect. Recent studies showed that the use of colloidal nanoparticle formulations and the potent COX 2 inhibitor bromfenac may enhance NSAID efficacy in eye preparations.

  3. Usage-Oriented Topic Maps Building Approach

    Science.gov (United States)

    Ellouze, Nebrasse; Lammari, Nadira; Métais, Elisabeth; Ben Ahmed, Mohamed

    In this paper, we present a collaborative and incremental construction approach for multilingual Topic Maps based on enrichment and merging techniques. In recent years, several Topic Map building approaches have been proposed, endowed with different characteristics. Generally, they are dedicated to particular data types like text, semi-structured data, relational data, etc. We note also that most of these approaches take as input monolingual documents to build the Topic Map. The problem is that the large majority of resources available today are written in various languages, and these resources could be relevant even to non-native speakers. Thus, our work is driven towards a collaborative and incremental method for Topic Map construction from textual documents available in different languages. To enrich the Topic Map, we take as input a domain thesaurus, and we also propose to explore Topic Map usage, that is, the potential questions users may ask of the source documents.

  4. Tracking topic birth and death in LDA.

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Andrew T.; Robinson, David Gerald

    2011-09-01

    Most topic modeling algorithms that address the evolution of documents over time use the same number of topics at all times. This obscures the common occurrence in the data where new subjects arise and old ones diminish or disappear entirely. We propose an algorithm to model the birth and death of topics within an LDA-like framework. The user selects an initial number of topics, after which new topics are created and retired without further supervision. Our approach also accommodates many of the acceleration and parallelization schemes developed in recent years for standard LDA. In recent years, topic modeling algorithms such as latent semantic analysis (LSA)[17], latent Dirichlet allocation (LDA)[10] and their descendants have offered a powerful way to explore and interrogate corpora far too large for any human to grasp without assistance. Using such algorithms we are able to search for similar documents, model and track the volume of topics over time, search for correlated topics or model them with a hierarchy. Most of these algorithms are intended for use with static corpora where the number of documents and the size of the vocabulary are known in advance. Moreover, almost all current topic modeling algorithms fix the number of topics as one of the input parameters and keep it fixed across the entire corpus. While this is appropriate for static corpora, it becomes a serious handicap when analyzing time-varying data sets where topics come and go as a matter of course. This is doubly true for online algorithms that may not have the option of revising earlier results in light of new data. To be sure, these algorithms will account for changing data one way or another, but without the ability to adapt to structural changes such as entirely new topics they may do so in counterintuitive ways.
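
    A much-simplified heuristic conveys the flavor of birth/death tracking: compare topic-word distributions between consecutive time slices, retire topics whose mass collapses or that match nothing in the new slice, and call unmatched new topics births. The thresholds and cosine test below are illustrative, not the paper's algorithm.

```python
# Simplified birth/death bookkeeping between two time slices of topics.
import numpy as np

def cos(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def track(prev, new, death_mass=0.02, match_sim=0.6):
    """Topics are (word_distribution, mass) pairs; returns survived/born/died."""
    survived, born = [], []
    for wd, mass in new:
        if mass < death_mass:
            continue                      # too little mass to keep
        sims = [cos(wd, pwd) for pwd, _ in prev]
        (survived if sims and max(sims) >= match_sim else born).append(wd)
    died = [pwd for pwd, _ in prev
            if not any(cos(pwd, wd) >= match_sim for wd, _ in new)]
    return survived, born, died

rng = np.random.default_rng(0)
old = [(rng.dirichlet(np.ones(50)), 0.5), (rng.dirichlet(np.ones(50)), 0.5)]
new = [(old[0][0], 0.6), (rng.dirichlet(np.ones(50)), 0.4)]
print([len(x) for x in track(old, new)])  # counts: survived, born, died
```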

  5. Male Infertility: MedlinePlus Health Topic

    Science.gov (United States)

    ... Encyclopedia entries: Retrograde ejaculation; Semen analysis; Sperm release pathway; Testicular biopsy. Related health topics: Assisted Reproductive Technology; Female Infertility; Infertility ...

  6. The treatment of rosacea with topical ivermectin.

    Science.gov (United States)

    Ali, S T; Alinia, H; Feldman, S R

    2015-04-01

    The treatment of rosacea is challenging because several pathophysiologic processes may be involved, including neurovascular dysregulation and alterations in innate immune status. Demodex mites may play a role in the latter mechanism. Topical ivermectin is a new therapeutic modality which demonstrates antiparasitic and anti-inflammatory properties. This article reviews published evidence related to the efficacy and safety of topical ivermectin. PubMed was utilized to search for key words "topical ivermectin", "ivermectin cream" and "rosacea". Three clinical trials were found that studied topical ivermectin as a treatment option for rosacea. Ivermectin was effective, safe and well tolerated.

  7. Erectile Dysfunction: MedlinePlus Health Topic

    Science.gov (United States)

    ... Encyclopedia entries: Drugs that may cause impotence; Erection problems; Erection problems - aftercare; Prolactin blood test. Related health topics: Penis Disorders ...

  8. Lattice theory special topics and applications

    CERN Document Server

    Wehrung, Friedrich

    2014-01-01

    George Grätzer's Lattice Theory: Foundation is his third book on lattice theory (General Lattice Theory, 1978, second edition, 1998). In 2009, Grätzer considered updating the second edition to reflect some exciting and deep developments. He soon realized that to lay the foundation, to survey the contemporary field, to pose research problems, would require more than one volume and more than one person. So Lattice Theory: Foundation provided the foundation. Now we complete this project with Lattice Theory: Special Topics and Applications, written by a distinguished group of experts, to cover some of the vast areas not in Foundation. This first volume is divided into three parts. Part I. Topology and Lattices includes two chapters by Klaus Keimel, Jimmie Lawson and Ales Pultr, Jiri Sichler. Part II. Special Classes of Finite Lattices comprises four chapters by Gabor Czedli, George Grätzer and Joseph P. S. Kung. Part III. Congruence Lattices of Infinite Lattices and Beyond includes four chapters by Friedrich W...

  9. Steam injections wells: topics to consider in casing design of steam injection wells; Revestimento para pocos de vapor

    Energy Technology Data Exchange (ETDEWEB)

    Conceicao, Antonio Carlos Farias [PETROBRAS, Recife, PE (Brazil). Gerencia de Perfuracao do Nordeste. Div. de Operacoes

    1994-07-01

    Steam injection is one of the processes used to increase production from very viscous oil reservoirs. A well is completed at a temperature of about 110 deg F, and during steam injection that temperature rises to around 600 deg F. Strain or breakdown of the casing may occur due to the critical conditions generated by the change of temperature. The usual casing design methods do not take into account special environmental conditions such as those which exist during steam injection. From the results of this study we come to the conclusion that casing of grade K-55, heavy weight with premium connections, without pre-stressing and adequately heated, is the best option for steam injection well completion for most of the fields in Brazil. (author)
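
    A back-of-envelope calculation shows why the temperature swing is critical. Assuming a fully constrained casing string and generic handbook constants for steel (these values are assumptions, not taken from the paper), the thermal stress already exceeds the K-55 yield strength:

```python
# Thermal stress in fully constrained casing: sigma = E * alpha * dT.
# Generic steel constants; illustrative, not values from the paper.
E = 30e6        # Young's modulus of steel, psi
alpha = 6.9e-6  # thermal expansion coefficient, 1/degF
dT = 600 - 110  # injection minus completion temperature, degF

sigma = E * alpha * dT
print(f"thermal stress ~ {sigma / 1000:.0f} ksi vs K-55 yield of 55 ksi")
# ~101 ksi >> 55 ksi: unrelieved casing would yield, which is why adequate
# pre-heating (reducing the effective dT) and premium connections matter.
```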

  10. An Evolutionary Game Model of Multi-Topics Diffusion in Social Network

    Directory of Open Access Journals (Sweden)

    Su Jia-Hao

    2017-01-01

    Full Text Available One major function of social networks is the dissemination of information such as news, comments, and rumors. The passing of information from a sender to a receiver intrinsically involves both of them, through their memory, reputation, and preference, which in turn determine their decisions on whether or not to diffuse the topic. To understand these human aspects of topic dissemination, we propose a game-theoretical model of multi-topic diffusion mechanisms in a social network. Each individual in the network is considered as both sender and receiver, who transmits different topics taking into account their payoffs and personalities (including memory, reputation and preferences). Several cases were analyzed, and the results suggest that multi-topic dissemination is strongly affected by self-perception, gregariousness, and information gain.
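
    The decision rule at the heart of such a model can be sketched simply: a receiver adopts and forwards a topic when its payoff (information gain from preference, plus reputation scaled by gregariousness, minus a memory cost for topics already seen) is positive. The weights below are illustrative, not the paper's calibrated model.

```python
# Toy payoff-driven topic diffusion over a small social graph.
import random

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
prefs = {n: random.random() for n in graph}   # per-node topic preference
seen = {0}                                    # node 0 originates the topic

def payoff(node, gregariousness=0.3, cost=0.6):
    info_gain = prefs[node]                   # preference drives gain
    reputation = gregariousness * len(graph[node])
    memory = 0.5 if node in seen else 0.0     # already-seen topics are stale
    return info_gain + reputation - memory - cost

frontier = [0]
while frontier:
    nxt = []
    for n in frontier:
        for m in graph[n]:
            if m not in seen and payoff(m) > 0:   # receiver decides to adopt
                seen.add(m)
                nxt.append(m)
    frontier = nxt
print("topic reached nodes:", sorted(seen))
```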

  11. The refinement of topics for systematic reviews: lessons and recommendations from the Effective Health Care Program.

    Science.gov (United States)

    Buckley, David I; Ansari, Mohammed T; Butler, Mary; Soh, Clara; Chang, Christine S

    2014-04-01

    The Agency for Healthcare Research and Quality (AHRQ) Effective Health Care Program conducts systematic reviews of health-care topics nominated by stakeholders. Topics undergo refinement to ensure relevant questions of appropriate scope and useful reviews. Input from key informants, experts, and a literature scan informs changes in the nominated topic. AHRQ convened a work group to assess approaches and develop recommendations for topic refinement. Work group members experienced in topic refinement generated a list of questions and guiding principles relevant to the refinement process. They discussed each issue and reached agreement on recommendations. Topics should address important health-care questions or dilemmas, consider stakeholder priorities and values, reflect the state of the science, and be consistent with systematic review research methods. Guiding principles of topic refinement are fidelity to the nomination, relevance, research feasibility, responsiveness to stakeholder inputs, reduced investigator bias, transparency, and suitable scope. Suggestions for stakeholder engagement, synthesis of input, and reporting are discussed. Refinement decisions require judgment and balancing guiding principles. Variability in topics precludes a prescriptive approach. Accurate, rigorous, and useful systematic reviews require well-refined topics. These guiding principles and methodological recommendations may help investigators refine topics for reviews. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Earthquake forecasting: a possible solution considering the GPS ionospheric delay

    Directory of Open Access Journals (Sweden)

    M. De Agostino

    2011-12-01

Full Text Available The recent earthquakes in L'Aquila (Italy) and in Japan have dramatically emphasized the problem of natural disasters and their correct forecasting. One of the aims of the research community is to find a possible and reliable forecasting method, considering all the available technologies and tools. Starting from recently developed research on this topic, and considering that the number of GPS reference stations around the world is continuously increasing, this study investigates whether it is possible to use GPS data to enhance earthquake forecasting. In some cases, the ionospheric activity level increases just before an earthquake and shows a different behaviour 5–10 days before the event, when the seismic event has a magnitude greater than about 4–4.5. Considering the GPS data from the reference stations located around the L'Aquila area (Italy), an analysis of the daily variations of the ionospheric signal delay has been carried out in order to evaluate a possible correlation between seismic events and unexpected variations of ionospheric activity. Many different scenarios have been tested, in particular considering the elevation angles, the visibility lengths and the time of day (morning, afternoon or night) of the satellites. In this paper, the contribution of the ionospheric impact is shown: a plausible correlation between ionospheric delay and the earthquake can be seen about one week before the seismic event.
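
    The daily-variation analysis can be caricatured as a baseline-deviation test. The sketch below flags days whose delay departs from a trailing baseline; the series and thresholds are synthetic assumptions, not GPS observations:

```python
# Flag days whose ionospheric delay departs from a trailing baseline, a crude
# stand-in for the daily-variation analysis described above.
import numpy as np

def flag_anomalies(daily_delay, window=15, k=3.0):
    """Indices of days deviating more than k sigma from the mean of the
    preceding `window` days."""
    flagged = []
    for t in range(window, len(daily_delay)):
        base = daily_delay[t - window:t]
        mu, sigma = base.mean(), base.std()
        if sigma > 0 and abs(daily_delay[t] - mu) > k * sigma:
            flagged.append(t)
    return flagged

rng = np.random.default_rng(0)
series = rng.normal(10.0, 0.5, 60)   # nominal daily delay, arbitrary units
series[50] += 4.0                    # injected anomaly ~1 week before an "event"
print(flag_anomalies(series))        # -> [50]
```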

  13. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  14. Aerodynamics of wind turbines emerging topics

    CERN Document Server

    Amano, R S

    2014-01-01

Focusing on the aerodynamics of wind turbines, with topics ranging from fundamentals to applications for horizontal-axis wind turbines, this book presents advanced topics including basic theory for wind turbine blade aerodynamics, computational methods, and special structural reinforcement techniques for wind turbine blades.

  15. Histiocytosis X: treatment with topical nitrogen mustard.

    Science.gov (United States)

    Berman, B; Chang, D L; Shupack, J L

    1980-07-01

    The case histories of two elderly patients with cutaneous histiocytosis X treated topically with nitrogen mustard are presented. The cutaneous lesions cleared within 2 to 3 weeks, and remission was maintained with daily topical administration of nitrogen mustard. The clinical impression of improvement was substantiated by light and electron microscopic studies prior to and after therapy.

  16. Nuffield Advanced Chemistry Courses Topic 10

    Science.gov (United States)

    Education in Science, 1975

    1975-01-01

Presents an alternative series of investigations replacing the propanone-trichloromethane system (Topic 10) in the Nuffield A-Level course. A trichloromethane-ethyl ethanoate (acetate) system presented in four experiments gave good results and removed the dangers arising from the other system. Topic 10 need not be re-written, just the replacement…

  17. Corneal staining after treatment with topical tetracycline

    NARCIS (Netherlands)

    R. Lapid-Gortzak; C.P. Nieuwendaal; A.R. Slomovic; L. Spanjaard

    2006-01-01

    Purpose: The purpose of this paper is to report a case of corneal staining after treatment with topical tetracycline. Methods: A patient with crystalline keratopathy caused by Streptococcus viridans after corneal transplantation was treated topically with tetracycline eye drops, based on results of

  18. Psoriasis: improving adherence to topical therapy.

    NARCIS (Netherlands)

    Feldman, S.R.; Horn, E.J.; Balkrishnan, R.; Basra, M.K.; Finlay, A.Y.; McCoy, D.; Menter, A.; Kerkhof, P.C.M. van de

    2008-01-01

    Topical therapy has an important role in psoriasis treatment. It is efficacious and has a favorable safety profile as demonstrated in clinical trials. However, poor treatment outcomes from topical therapy regimens likely result from poor adherence and ineffective use of the medication. The Internati

  19. Prioritizing dermatoses: rationally selecting guideline topics

    NARCIS (Netherlands)

    Borgonjen, R.J.; Everdingen, J.J. van; Kerkhof, P.C.M. van de; Spuls, P.I.

    2015-01-01

    BACKGROUND: Clinical practice guideline (CPG) development starts with selecting appropriate topics, as resources to develop a guideline are limited. However, a standardized method for topic selection is commonly missing and the way different criteria are used to prioritize is not clear. OBJECTIVES:

  20. Fostering Topic Knowledge: Essential for Academic Writing

    Science.gov (United States)

    Proske, Antje; Kapp, Felix

    2013-01-01

    Several researchers emphasize the role of the writer's topic knowledge for writing. In academic writing topic knowledge is often constructed by studying source texts. One possibility to support that essential phase of the writing process is to provide interactive learning questions which facilitate the construction of an adequate situation…

  2. Severe photosensitivity reaction induced by topical diclofenac

    OpenAIRE

    Akat, Pramod B.

    2013-01-01

    Albeit uncommon, photosensitivity reaction induced by diclofenac can be an unfortunate adverse reaction complicating its use as a topical analgesic. We here present a case of a patient who suffered such a reaction as a result of exposure to diclofenac, employed as a topical analgesic for low backache. The lesions healed with conservative management without extensive scarring or other complications.

  3. Topic Maps Based Project Knowledge Management

    Institute of Scientific and Technical Information of China (English)

    Wu Xiaofan; Zhou Liang; Zhang Lei; Li Lingzhi; Ding Qiulin

    2006-01-01

Based on topic maps, a preprocessing scheme using similarity comparison is presented and applied in knowledge management. A topic- and occurrence-oriented merging algorithm is also introduced to implement knowledge integration for the sub-system. An Omnigator-supported example from an aeronautics institute is used to validate the preprocessing method, and the result indicates that it can speed up the research schedule.
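
    The similarity-based merging step can be sketched as follows; the threshold, the flat topic-to-occurrences representation, and the sample data are assumptions for illustration:

```python
# Merge two topic maps, unifying topics whose names are sufficiently similar.
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def merge_topic_maps(map_a: dict, map_b: dict) -> dict:
    """Merge topic->occurrences dicts; near-duplicate topic names are unified."""
    merged = {t: set(occ) for t, occ in map_a.items()}
    for topic_b, occ_b in map_b.items():
        match = next((t for t in merged if similar(t, topic_b)), None)
        if match:
            merged[match] |= set(occ_b)       # same topic: union of occurrences
        else:
            merged[topic_b] = set(occ_b)      # genuinely new topic
    return merged

a = {"Aerodynamic Noise": {"report-12.pdf"}, "Rotor Design": {"note-3.doc"}}
b = {"aerodynamic noise": {"exp-7.xls"}, "Materials": {"spec-1.pdf"}}
print(merge_topic_maps(a, b))
```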

  4. Severe photosensitivity reaction induced by topical diclofenac

    Directory of Open Access Journals (Sweden)

    Pramod B Akat

    2013-01-01

    Full Text Available Albeit uncommon, photosensitivity reaction induced by diclofenac can be an unfortunate adverse reaction complicating its use as a topical analgesic. We here present a case of a patient who suffered such a reaction as a result of exposure to diclofenac, employed as a topical analgesic for low backache. The lesions healed with conservative management without extensive scarring or other complications.

  5. Infantile generalized hypertrichosis caused by topical minoxidil*

    Science.gov (United States)

    Rampon, Greice; Henkin, Caroline; de Souza, Paulo Ricardo Martins; de Almeida Jr, Hiram Larangeira

    2016-01-01

    Rare cases of hypertrichosis have been associated with topically applied minoxidil. We present the first reported case in the Brazilian literature of generalized hypertrichosis affecting a 5-year-old child, following use of minoxidil 5%, 20 drops a day, for hair loss. The laboratory investigation excluded hyperandrogenism and thyroid dysfunction. Topical minoxidil should be used with caution in children. PMID:26982785

  6. Infantile generalized hypertrichosis caused by topical minoxidil.

    Science.gov (United States)

    Rampon, Greice; Henkin, Caroline; de Souza, Paulo Ricardo Martins; Almeida, Hiram Larangeira de

    2016-01-01

    Rare cases of hypertrichosis have been associated with topically applied minoxidil. We present the first reported case in the Brazilian literature of generalized hypertrichosis affecting a 5-year-old child, following use of minoxidil 5%, 20 drops a day, for hair loss. The laboratory investigation excluded hyperandrogenism and thyroid dysfunction. Topical minoxidil should be used with caution in children.

  7. Topical cholesterol in clofazimine induced ichthyosis

    Directory of Open Access Journals (Sweden)

    Pandey S

    1994-01-01

Full Text Available Topical application of 10% cholesterol in petrolatum significantly (P < 0.05) controlled the development of ichthyosis in 62 patients taking 100 mg clofazimine daily for a period of 3 months. However, topical cholesterol application did not affect the lowering of serum cholesterol induced by oral clofazimine. The probable mechanism of action is discussed.

  8. Topic Prominence in Chinese EFL Learners' Interlanguage

    Science.gov (United States)

    Li, Shaopeng; Yang, Lianrui

    2014-01-01

The present study aims to investigate the general characteristics of the topic-prominent typological interlanguage development of Chinese learners of English in terms of acquiring subject-prominent English structures from a discourse perspective. Topic structures mainly appear in Chinese discourse in the form of topic chains (Wang, 2002; 2004). The…

  10. Generating focused topic-specific sentiment lexicons

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Weerkamp, W.

    2010-01-01

    We present a method for automatically generating focused and accurate topic-specific subjectivity lexicons from a general purpose polarity lexicon that allow users to pin-point subjective on-topic information in a set of relevant documents. We motivate the need for such lexicons in the field of medi

  11. Correlated Topic Model for Web Services Ranking

    Directory of Open Access Journals (Sweden)

    Mustapha AZNAG

    2013-07-01

Full Text Available With the increasing number of published Web services providing similar functionalities, it’s very tedious for a service consumer to make a decision and select the appropriate one according to her/his needs. In this paper, we explore several probabilistic topic models: Probabilistic Latent Semantic Analysis (PLSA), Latent Dirichlet Allocation (LDA) and Correlated Topic Model (CTM) to extract latent factors from web service descriptions. In our approach, topic models are used as efficient dimension reduction techniques, able to capture semantic relationships between word-topic and topic-service associations, interpreted in terms of probability distributions. To address the limitation of keyword-based queries, we represent each web service description in a vector space and introduce a new approach for discovering and ranking web services using latent factors. In our experiments, we evaluated our Service Discovery and Ranking approach by calculating the precision (P@n) and normalized discounted cumulative gain (NDCGn).
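
    To make the topic-space ranking idea concrete, the following minimal sketch fits an LDA model to a toy set of service descriptions and ranks them against a query by cosine similarity in topic space. The corpus, topic count, and use of scikit-learn are illustrative assumptions, not the paper's setup:

```python
# Rank toy "service descriptions" against a query in LDA topic space.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

services = [
    "currency exchange rate conversion service",
    "weather forecast temperature humidity service",
    "stock price quote financial market data",
    "hotel booking travel reservation service",
]
query = ["financial exchange rate data"]

vec = CountVectorizer()
X = vec.fit_transform(services)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)              # service-topic distributions
q_topics = lda.transform(vec.transform(query)) # query in the same topic space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

scores = [cosine(q_topics[0], d) for d in doc_topics]
for i in np.argsort(scores)[::-1]:             # best match first
    print(f"{scores[i]:.3f}  {services[i]}")
```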

  12. Topical antioxidants in radiodermatitis: a clinical review.

    Science.gov (United States)

    Kodiyan, Joyson; Amber, Kyle T

    2015-09-01

Radiation-induced skin toxicity is the most prevalent side effect of radiation therapy. Not only does it have a significant effect on patients' quality of life, but it also results in poor follow-up and early termination of radiotherapy treatment. Several skin care practices and topical applications have been studied in the field of radiodermatitis, including skin washing, topical steroids, and mechanical skin barriers. Aside from these methods, many patients turn to complementary and alternative medicine for the prevention and treatment of radiodermatitis. Many of these alternative therapies are topically applied antioxidants. While the rationale behind the use of antioxidants in treating radiodermatitis is strong, clinical studies have been far less consistent. Even in large-scale randomised controlled trials, findings have been limited by the inconsistent use of topical vehicles and placebos. In this article, the authors review the role of topical antioxidants to better help the practitioner navigate the different available skin-directed antioxidants.

  13. Antimicrobial activity of topical skin pharmaceuticals - an in vitro study.

    Science.gov (United States)

    Alsterholm, Mikael; Karami, Nahid; Faergemann, Jan

    2010-05-01

The aim of this study was to investigate the antimicrobial activity of currently available topical skin pharmaceuticals against Candida albicans, Escherichia coli, Staphylococcus aureus, Staphylococcus epidermidis and Streptococcus pyogenes. The agar dilution assay was used to determine the minimal inhibitory concentration for cream formulations and their active substances. Corticosteroid formulations with the antiseptics clioquinol or halquinol were active against all microbes. The hydrogen peroxide formulation was primarily active against staphylococci. Clotrimazole, miconazole and econazole showed an effect against staphylococci in addition to their effect on C. albicans. In contrast, terbinafine had no antibacterial effect. Fusidic acid was active against staphylococci, with slightly weaker activity against S. pyogenes and no activity against C. albicans or E. coli. In summary, some topical skin pharmaceuticals have broad antimicrobial activity in vitro, clioquinol and halquinol being the most diverse. In limited superficial skin infections, topical treatment can be an alternative to systemic antibiotics and should be considered. With the global threat of multi-resistant bacteria, there is a need for new, topical, non-resistance-promoting antimicrobial preparations for the treatment of skin infections.

  14. Recent advances and perspectives in topical oral anesthesia.

    Science.gov (United States)

    Franz-Montan, Michelle; Ribeiro, Lígia Nunes de Morais; Volpato, Maria Cristina; Cereda, Cintia Maria Saia; Groppo, Francisco Carlos; Tofoli, Giovana Randomille; de Araújo, Daniele Ribeiro; Santi, Patrizia; Padula, Cristina; de Paula, Eneida

    2017-05-01

    Topical anesthesia is widely used in dentistry to reduce pain caused by needle insertion and injection of the anesthetic. However, successful anesthesia is not always achieved using the formulations that are currently commercially available. As a result, local anesthesia is still one of the procedures that is most feared by dental patients. Drug delivery systems (DDSs) provide ways of improving the efficacy of topical agents. Areas covered: An overview of the structure and permeability of oral mucosa is given, followed by a review of DDSs designed for dental topical anesthesia and their related clinical trials. Chemical approaches to enhance permeation and anesthesia efficacy, or to promote superficial anesthesia, include nanostructured carriers (liposomes, cyclodextrins, polymeric nanoparticle systems, solid lipid nanoparticles, and nanostructured lipid carriers) and different pharmaceutical dosage forms (patches, bio- and mucoadhesive systems, and hydrogels). Physical methods include pre-cooling, vibration, iontophoresis, and microneedle arrays. Expert opinion: The combination of different chemical and physical methods is an attractive option for effective topical anesthesia in oral mucosa. This comprehensive review should provide the readers with the most relevant options currently available to assist pain-free dental anesthesia. The findings should be considered for future clinical trials.

  15. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries, such as Los Alamos, CERN and the Rutherford Laboratory, but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  16. The GF11 supercomputer

    Science.gov (United States)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1987-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics.

  17. Associative Memories for Supercomputers

    Science.gov (United States)

    1992-12-01

Transform (FFT) is computed. The real part is extracted and a bias equal to its minimum is added to it in order to make all the values positive. Each...

  18. Power-constrained supercomputing

    Science.gov (United States)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
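
    As a toy illustration of the schedule-selection problem above (choosing a DVFS state and thread count for each computation phase subject to a power bound), the following sketch greedily enumerates configurations under an invented cost model; the dissertation's LP/ILP formulations capture far richer coupling than this:

```python
# Greedy per-phase schedule: pick the (DVFS frequency, thread count) pair that
# minimizes time while respecting a power cap. Cost model and numbers invented.
from itertools import product

FREQS = [1.2, 1.8, 2.4]      # GHz DVFS states
THREADS = [8, 16, 32]        # OpenMP thread counts

def time_and_power(work, f, n):
    """Hypothetical model: time ~ work/(f*sqrt(n)); power grows with f^2 and n."""
    return work / (f * n ** 0.5), 20.0 + 5.0 * f * f * n ** 0.75

def schedule(phases, power_cap):
    plan, total = [], 0.0
    for work in phases:
        feasible = [(t, f, n) for f, n in product(FREQS, THREADS)
                    for t, p in [time_and_power(work, f, n)] if p <= power_cap]
        t, f, n = min(feasible)          # fastest configuration under the cap
        plan.append((f, n))
        total += t
    return plan, total

plan, t = schedule([100.0, 250.0, 60.0], power_cap=180.0)
print(plan, round(t, 1))
```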

  19. Topics in Matrix Sampling Algorithms

    CERN Document Server

    Boutsidis, Christos

    2011-01-01

We study three fundamental problems of Linear Algebra, lying at the heart of various Machine Learning applications, namely: 1)"Low-rank Column-based Matrix Approximation". We are given a matrix A and a target rank k. The goal is to select a subset of columns of A and, by using only these columns, compute a rank k approximation to A that is as good as the rank k approximation that would have been obtained by using all the columns; 2) "Coreset Construction in Least-Squares Regression". We are given a matrix A and a vector b. Consider the (over-constrained) least-squares problem of minimizing ||Ax-b||, over all vectors x in D. The domain D represents the constraints on the solution and can be arbitrary. The goal is to select a subset of the rows of A and b and, by using only these rows, find a solution vector that is as good as the solution vector that would have been obtained by using all the rows; 3) "Feature Selection in K-means Clustering". We are given a set of points described with respect to a large numbe...
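
    Problem (1) can be illustrated with the classic norm-based column-sampling scheme; the sketch below samples columns with probability proportional to their squared norms and projects onto the subspace they span. This is a generic algorithm of the area with toy sizes, not necessarily the thesis's own method:

```python
# Norm-based column sampling for low-rank matrix approximation.
import numpy as np

def column_sample_approx(A, c, k, seed=0):
    rng = np.random.default_rng(seed)
    probs = (A ** 2).sum(axis=0)
    probs /= probs.sum()                          # squared-norm probabilities
    cols = rng.choice(A.shape[1], size=c, replace=True, p=probs)
    C = A[:, cols] / np.sqrt(c * probs[cols])     # standard rescaling
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    Uk = U[:, :k]
    return Uk @ (Uk.T @ A)                        # rank-k approximation of A

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 10)) @ rng.normal(size=(10, 500))  # ~rank-10 matrix
A_k = column_sample_approx(A, c=60, k=10)
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))           # small rel. error
```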

  20. Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale.

    Directory of Open Access Journals (Sweden)

    Adam M Wilson

Full Text Available Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for 'edit wars' surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically "controversial" (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider "noncontroversial" (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.

  1. Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale.

    Science.gov (United States)

    Wilson, Adam M; Likens, Gene E

    2015-01-01

    Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for 'edit wars' surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically "controversial" (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider "noncontroversial" (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.

  2. Women's health topics in dental hygiene curricula.

    Science.gov (United States)

    Gibson-Howell, Joan C

    2010-01-01

Minimal inclusion of women's health topics in dental and dental hygiene curricula may not prepare dental health care workers to provide comprehensive health care to females. The purposes of these surveys in 2001 and 2007 were to investigate United States dental hygiene school curricula regarding inclusion of women's health topics in differing degree programs (associate/certificate, baccalaureate, associate/baccalaureate) and course status (required or elective). The surveys also identified sources used to obtain women's health topics, assessed faculty continuing education participation in women's health, determined satisfaction with current curricula, questioned whether change was anticipated and, if so, in what topics, identified where and in what ways students apply their knowledge about women's health, and reported the progress of dental hygiene curricula over the 6-year period. Surveys were sent to dental hygiene program directors in 2001 (N=256) and in 2007 (N=288) asking them to complete the questionnaire. There was no statistically significant association between 2001 and 2007 survey results by degree or program setting. The educational issue of women's general health continuing education courses/topics completed by dental hygiene faculty in the past 2 years showed a statistically significant difference during that time interval. No statistically significant difference existed between the survey years regarding topics on women's general health and oral health. Regardless of statistical significance, percentage differences were also examined, as they may reveal relevant issues. These surveys establish a baseline of women's health topics included in dental hygiene curricula in order to assess the knowledge of dental hygienists in practice.

  3. Demand power with EV charging schemes considering actual data

    Directory of Open Access Journals (Sweden)

    Jun-Hyeok Kim

    2016-01-01

Full Text Available Eco-friendly energies have recently become a popular topic. Given this trend, we predict that a large number of electric vehicles (EVs) will be widely used. However, EVs need to be connected to a power system for charging, which can cause severe problems such as a rapid increase in demand power. Therefore, in this study, we analyze the effects of EV charging on demand power under different charging schemes, namely dumb charging, off-peak charging, and time-of-use (ToU) price-based charging. For practical analysis, we conduct simulations considering the actual power system and driving patterns in South Korea. Simulation results show that the ToU price-based charging scheme exhibits better performance in terms of demand power than the other charging schemes.
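
    A toy simulation conveys why the choice of scheme matters; the arrival pattern, charging duration, and night window below are invented, not the South Korean data used in the paper:

```python
# Compare hourly demand under three simple EV charging schemes.
import numpy as np

HOURS, N_EV, CHARGE_H = 24, 1000, 4
rng = np.random.default_rng(0)
arrivals = rng.normal(18, 2, N_EV).astype(int) % HOURS   # evening commute peak

def demand(starts):
    """Hourly demand (EV-hours) when each vehicle charges CHARGE_H hours."""
    load = np.zeros(HOURS)
    for s in starts:
        for h in range(s, s + CHARGE_H):
            load[h % HOURS] += 1
    return load

dumb = demand(arrivals)                                  # charge on arrival
off_peak = demand(np.full(N_EV, 23))                     # all defer to 23:00
tou = demand(rng.integers(22, 29, N_EV) % HOURS)         # staggered night starts

for name, load in [("dumb", dumb), ("off-peak", off_peak), ("ToU", tou)]:
    print(f"{name:9s} peak demand: {load.max():.0f}")
# Naive synchronized off-peak charging creates a new rebound peak, which is
# one reason price-based staggering can outperform it.
```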

  4. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond some point saturates or worsens performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, and the MPI percentage (except PMLB) and IPC (Instructions per cycle) per core (except BT-MZ) increase. For a weak-scaling hybrid scientific application such as GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  5. MPI/OpenMP Hybrid Parallel Algorithm of Resolution of Identity Second-Order Møller-Plesset Perturbation Calculation for Massively Parallel Multicore Supercomputers.

    Science.gov (United States)

    Katouda, Michio; Nakajima, Takahito

    2013-12-10

A new algorithm for massively parallel calculations of electron correlation energy of large molecules based on the resolution of identity second-order Møller-Plesset perturbation (RI-MP2) technique is developed and implemented into the quantum chemistry software NTChem. In this algorithm, a Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) hybrid parallel programming model is applied to attain efficient parallel performance on massively parallel supercomputers. An in-core storage scheme of intermediate data of three-center electron repulsion integrals utilizing the distributed memory is developed to eliminate input/output (I/O) overhead. The parallel performance of the algorithm is tested on massively parallel supercomputers such as the K computer (using up to 45,992 central processing unit (CPU) cores) and a commodity Intel Xeon cluster (using up to 8192 CPU cores). The parallel RI-MP2/cc-pVTZ calculation of two-layer nanographene sheets (C150H30)2 (number of atomic orbitals is 9640) is performed using 8991 nodes and 71,288 CPU cores of the K computer.

  6. Control of pain with topical plant medicines

    Institute of Scientific and Technical Information of China (English)

James David Adams Jr.; Xiaogang Wang

    2015-01-01

Pain is normally treated with oral nonsteroidal anti-inflammatory agents and opioids. These drugs are dangerous and are responsible for many hospitalizations and deaths. It is much safer to use topical preparations made from plants to treat pain, even severe pain. To relieve pain, topical preparations must contain compounds that penetrate the skin and inhibit pain receptors such as transient receptor potential cation channels and cyclooxygenase-2. Inhibition of pain in the skin disrupts the pain cycle and avoids exposure of internal organs to large amounts of toxic compounds. Use of topical pain relievers has the potential to save many lives, decrease medical costs and improve therapy.

  7. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of documents) … topics at par with a much larger case specific vocabulary.

  8. Digital Social Network Mining for Topic Discovery

    Science.gov (United States)

    Moradianzadeh, Pooya; Mohi, Maryam; Sadighi Moshkenani, Mohsen

Networked computers are expanding more and more around the world, and digital social networks are becoming of great importance for many people's work and leisure. This paper focuses on discovering the topics of the information exchanged in a digital social network. In brief, our method uses a hierarchical dictionary of related topics and words that is mapped to a graph. Then, by comparing the keywords extracted from the social network content with the graph nodes, the probability of a relation between the content and the desired topics is computed. This model can be used in many applications such as advertising, viral marketing and high-risk group detection.
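
    The dictionary-matching step can be sketched as follows; the topic dictionary is flattened to a plain mapping here, and all data are invented:

```python
# Score candidate topics by the fraction of message keywords in their word lists.
TOPIC_WORDS = {
    "sports":   {"match", "team", "goal", "league", "score"},
    "finance":  {"stock", "market", "price", "trade", "bank"},
    "politics": {"vote", "election", "party", "policy", "bill"},
}

def topic_probabilities(keywords):
    hits = {t: len(set(keywords) & words) for t, words in TOPIC_WORDS.items()}
    total = sum(hits.values())
    if total == 0:
        return {t: 0.0 for t in TOPIC_WORDS}   # no known keywords at all
    return {t: h / total for t, h in hits.items()}

msg_keywords = ["market", "price", "vote", "trade"]
print(topic_probabilities(msg_keywords))
# -> finance 0.75, politics 0.25, sports 0.0
```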

  9. Liquid crystals: a new topic in physics for undergraduates

    CERN Document Server

    Pavlin, Jerneja; Cepic, Mojca

    2012-01-01

The paper presents a teaching module about liquid crystals. Since liquid crystals are linked to everyday student experiences and are also a topic of current scientific research, they are an excellent candidate for a modern topic to be introduced into education. We show that liquid crystals can provide a fil rouge through several fields of physics such as thermodynamics, optics and electromagnetism. We discuss what students should learn about liquid crystals and what physical concepts they should know before considering them. In the presentation of the teaching module, which consists of a lecture and experimental work in a chemistry and physics lab, we focus on experiments on phase transitions, polarization of light, double refraction and colours. A pilot evaluation of the module was performed among pre-service primary school teachers who have no special preference for natural sciences. The evaluation shows that the module is very efficient in transferring knowledge. A prior study showed that the informally ob...

  10. Advance Care Planning: Medical Issues to Consider

    Science.gov (United States)

    ... condition is considered to be “end stage” when optimal medical care can no longer stabilize the medical ... may enable this person to recover, get needed sleep and rest, and resume functional capacity when off ...

  11. Considering the cultural context in psychopathology formulations

    African Journals Online (AJOL)

    2013-03-02

    Mar 2, 2013 ... principles, affect the manner in which people perceive and react. [3] Further, Reber and ... significance of considering cultural aspects in the understanding ..... interpersonal relationships within it, and the nature of being.

  12. Some Topics in Percolation and Gelation Processes.

    Science.gov (United States)

    Gonzalez-Flores, Agustin Eduardo

The percolation problem has been studied extensively in recent years. One reason for this current interest is that it is a good model for a variety of physical phenomena, including the anomalous behavior of low temperature water and the gelation of polymers. In this dissertation we consider three main topics related to percolation problems: (a) A Position Space Renormalization Group Study of the "Four-Coordinated" Correlated Percolation Model. Recently, a new site-correlated percolation problem was introduced in connection with the anomalous properties of low temperature water. Within a position-space renormalization group approach, this problem is shown to belong to the same universality class as random percolation. (b) An Extension of the Flory-Stockmayer Theory to a Binary Mixture of Polymers. The old theory of vulcanization of long polymer chains by Flory and Stockmayer is known to be equivalent to the percolation problem on Bethe lattices. We extend the theory to treat the case of a binary mixture of two polymers A and B with three different types of cross-links between them (A-A, B-B and A-B). By solving a bichromatic percolation problem on the Bethe lattice with three different bond probabilities, we were able to find the critical surface (gelation threshold), the gel fraction, and the weight-average molecular weight of the finite molecules. When we take the appropriate limit of a one-component case, we recover the old results by Flory and Stockmayer. (c) An Approximate Treatment of Polymer Gelation in a Solvent. We consider the gelation problem of long polymer chains immersed in a solvent, where the monomers composing the chains are capable of forming hydrogen bonds when they touch. Recent experimental results in these systems have shown that the gelation curves for the same polymer system with different solvents (different quality of the solvent) cross when plotted on the same temperature-concentration diagram. In this work we present an approximate
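
    As background to topic (a), the spanning behaviour of ordinary (uncorrelated) site percolation can be estimated with a short Monte Carlo; the lattice size, trial count, and use of scipy are illustrative choices:

```python
# Monte Carlo estimate of the spanning probability in 2-D random site percolation.
import numpy as np
from scipy.ndimage import label

def spanning_probability(p, L=64, trials=200, seed=0):
    """Fraction of random L x L configurations (occupation probability p) with
    an occupied cluster connecting the top row to the bottom row."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        grid = rng.random((L, L)) < p
        labels, _ = label(grid)                   # 4-connected clusters
        common = np.intersect1d(labels[0], labels[-1])
        hits += bool((common > 0).any())          # ignore background label 0
    return hits / trials

for p in (0.5, 0.59, 0.65):   # site threshold on the square lattice is ~0.593
    print(p, spanning_probability(p))
```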

  13. MedlinePlus Health Topic Web Service

    Data.gov (United States)

    U.S. Department of Health & Human Services — A search-based Web service that provides access to disease, condition and wellness information via MedlinePlus health topic data in XML format. The service accepts...
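
    A minimal query against the service might look like the sketch below; the endpoint, parameters, and XML element and attribute names are recalled from MedlinePlus's public documentation and should be treated as assumptions to verify before use:

```python
# Sketch of a query to the MedlinePlus health topic Web service.
import requests
import xml.etree.ElementTree as ET

resp = requests.get(
    "https://wsearch.nlm.nih.gov/ws/query",            # assumed endpoint
    params={"db": "healthTopics", "term": "diabetes"}, # assumed parameters
    timeout=10,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Each <document> carries several <content> elements tagged by a name attribute.
for doc in root.iter("document"):
    for content in doc.findall("content"):
        if content.get("name") == "title":
            # itertext() flattens any highlight markup inside the title.
            print("".join(content.itertext()).strip())
```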

  14. Sports Safety: MedlinePlus Health Topic

    Science.gov (United States)

  15. Helpful Contacts - Agency List By Topic

    Data.gov (United States)

    U.S. Department of Health & Human Services — A database of agencies listed by state and topic to provide you with contact information for specific organizations or help you get answers to your Medicare related...

  16. Topical cidofovir for refractory verrucae in children.

    Science.gov (United States)

    Gupta, Monique; Bayliss, Susan J; Berk, David R

    2013-01-01

    Warts are common and are a challenge to treat in some children, especially immunocompromised children and those who fail or cannot tolerate salicylic acid preparations and cryotherapy. Cidofovir, a nucleotide analogue with antiviral activity, has demonstrated promising results when compounded into a topical form to treat refractory warts. We present a retrospective institutional review of 12 children with refractory verrucae treated with 1% to 3% topical cidofovir compounded in an unscented moisturizing cream, applied every other day to daily. In our institutional series, only three patients (25%) demonstrated complete clearance of their verrucae. An additional four patients (33%) demonstrated partial clearance. Our experience using topical cidofovir has been less successful than previous institutional reviews, possibly because we used a lower concentration and less-frequent dosing. More studies are needed to better characterize the efficacy, safety, and dosing of topical cidofovir for the treatment of refractory warts.

  17. Diabetes: MedlinePlus Health Topic

    Science.gov (United States)

  18. Corneal Neurotoxicity Due to Topical Benzalkonium Chloride

    OpenAIRE

    Sarkar, Joy; Chaudhary, Shweta; Namavari, Abed; Ozturk, Okan; Chang, Jin-Hong; Yco, Lisette; Sonawane, Snehal; Khanolkar, Vishakha; Hallak, Joelle; Jain, Sandeep

    2012-01-01

    Topical application of benzalkonium chloride (BAK) to the eye causes dose-related corneal neurotoxicity. Corneal inflammation and reduction in aqueous tear production accompany neurotoxicity. Cessation of BAK treatment leads to recovery of corneal nerve density.

  19. Meeting Course Objectives by Integrating Topics.

    Science.gov (United States)

    Kjeseth, Steven A.

    1980-01-01

    Outlines two experiments by which several different mathematical topics can be integrated into the unifying theme of oscillating pendulums, as part of a secondary school Algebra II and Trigonometry class. (CS)

  20. Pulmonary Rehabilitation: MedlinePlus Health Topic

    Science.gov (United States)

  1. Clinical Trials.Gov: A Topical Analyses.

    Science.gov (United States)

    Anand, Vibha; Cahan, Amos; Ghosh, Soumya

    2017-01-01

ClinicalTrials.gov was established as a web-based registry for clinical trials of human participants in 2000. Mandatory registration started in 2008. Given more than a decade of registered trials, it is important to understand the "topic" areas and their evolution over time from this resource. This information may help in identifying current knowledge gaps. We use dynamic topic model (DTM) methods to discover topics and their evolution over the last 17 years. Our model suggests that there are disease- or organ-specific trials such as 'Cardiovascular disorders', 'Heart & Brain conditions', or 'Breast & Prostate cancer' as well as trials registered for general health. General health trials are less likely to be FDA regulated, but both health and pain management trials, as well as surgical, heart, and brain trials, have trended upward in recent years, while advanced cancer trials have trended downward. Our model derives unique insights from metadata associated with each topic area.
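
    The trend reading reported above can be reproduced in miniature from any topic model's document-topic matrix: average topic prevalence per registration year and fit a slope. The sketch below uses synthetic stand-ins for the registry data:

```python
# Per-year topic prevalence and trend direction from a document-topic matrix.
import numpy as np

rng = np.random.default_rng(0)
years = rng.integers(2000, 2017, size=500)
doc_topic = rng.dirichlet([1.0, 1.0, 1.0], size=500)   # 3 toy "topics"
doc_topic[:, 0] += (years - 2000) * 0.02               # inject an upward trend
doc_topic /= doc_topic.sum(axis=1, keepdims=True)

uniq = np.unique(years)
for k in range(doc_topic.shape[1]):
    yearly = np.array([doc_topic[years == y, k].mean() for y in uniq])
    slope = np.polyfit(uniq, yearly, deg=1)[0]
    print(f"topic {k}: slope {slope:+.4f} "
          f"({'upward' if slope > 0 else 'downward'})")
```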

  2. Pain Relievers: MedlinePlus Health Topic

    Science.gov (United States)

  3. Dexmedetomidine premedication in cataract surgery under topical ...

    African Journals Online (AJOL)

    Keywords: cataract surgery, dexmedetomidine, intraocular pressure, patient and surgeon satisfaction, topical ... heart rate (HR), mean arterial pressure (MAP), respiratory rate (RR), .... Despite an apparently normal etCO2 on monitor, any.

  4. Environmental Health Topics from A to Z

    Science.gov (United States)

  5. Mining Concurrent Topical Activity in Microblog Streams

    CERN Document Server

    Panisson, A; Quaggiotto, M; Cattuto, C

    2014-01-01

Streams of user-generated content in social media exhibit patterns of collective attention across diverse topics, with temporal structures determined by both exogenous and endogenous factors. Teasing apart different topics and resolving their individual, concurrent activity timelines is a key challenge in extracting knowledge from microblog streams. Facing this challenge requires methods that expose latent signals by using term correlations across posts and over time. Here we focus on content posted to Twitter during the London 2012 Olympics, for which a detailed schedule of events is independently available and can be used for reference. We mine the temporal structure of topical activity by using two methods based on non-negative matrix factorization. We show that for events in the Olympics schedule that can be semantically matched to Twitter topics, the extracted Twitter activity timeline closely matches the known timeline from the schedule. Our results show that, given appropriate techn...
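
    The factorization idea can be sketched on synthetic data: factor a time-by-term count matrix with NMF so that each component pairs a term profile with an activity timeline. The shapes, injected events, and use of scikit-learn are illustrative assumptions:

```python
# Recover concurrent activity timelines from a (time-bin x term) matrix via NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
T, V, K = 48, 30, 2                       # time bins, vocabulary size, topics
timelines = np.zeros((T, K))
timelines[10:16, 0] = 1.0                 # event A active in bins 10-15
timelines[30:34, 1] = 1.0                 # event B active in bins 30-33
term_profiles = rng.random((K, V)) ** 4   # sparse-ish topic-term weights
X = timelines @ term_profiles + 0.01 * rng.random((T, V))

model = NMF(n_components=K, init="nndsvd", random_state=0)
W = model.fit_transform(X)                # recovered activity timelines
H = model.components_                     # recovered term profiles
for k in range(K):
    print(f"component {k}: activity peaks at time bin {W[:, k].argmax()}")
```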

  6. Cardiovascular Complications Resulting from Topical Lidocaine Application

    Directory of Open Access Journals (Sweden)

    Feng Lin

    2008-12-01

Full Text Available Topical lidocaine is one of the most commonly used anesthetics in the emergency department (ED). The advantages of topical anesthesia include ease of application, minimal discomfort on administration, and rapid onset of anesthesia. Systemic toxic effects after topical lidocaine application are rare. We present the case of a man aged 48 years with no history of heart disease and no evidence of bradycardia in previous electrocardiograms. The patient had sprayed lidocaine solution on the glans of his penis before sex during the 2 weeks prior to admission. Cardiovascular adverse events occurred, including chest tightness and bradycardia. After 2 hours of conservative treatment at our ED, his symptoms were alleviated. He was discharged from the ED without any medication. The case suggests that detailed patient histories are necessary for accurate diagnosis and that rapid diagnosis and implementation of treatment are necessary for successful patient outcomes in cases of cardiovascular complications resulting from topical lidocaine application.

  7. Section 608 Technician Certification Test Topics

    Science.gov (United States)

Identifies some of the topics covered on Section 608 Technician Certification tests, such as ozone depletion, the Clean Air Act and Montreal Protocol, substitute refrigerants and oils, and refrigeration and recovery techniques.

  8. Pharmacogenetics of ophthalmic topical β-blockers

    OpenAIRE

    Sidjanin, Duska J.; McCarty, Catherine A.; Patchett, Richard; Smith, Edward; Wilke, Russell A.

    2008-01-01

    Glaucoma is the second leading cause of blindness worldwide. The primary glaucoma risk factor is elevated intraocular pressure. Topical β-blockers are affordable and widely used to lower intraocular pressure. Genetic variability has been postulated to contribute to interpersonal differences in efficacy and safety of topical β-blockers. This review summarizes clinically significant polymorphisms that have been identified in the β-adrenergic receptors (ADRB1, ADRB2 and ADRB3). The implications ...

  9. Treatment of pediculosis capitis with topical albendazole.

    Science.gov (United States)

    Ayoub, Nakhlé; Maatouk, Ismaël; Merhy, Martin; Tomb, Roland

    2012-02-01

    Pediculosis capitis, or head lice infestation, caused by Pediculus humanus capitis, is a common and ubiquitous health concern. Increasing resistance and treatment failures are reported with available topical pediculicides and may prove challenging to manage. Recent data indicate that the oral anti-helmintic agents thiabendazole and albendazole could represent new therapeutic options against pediculosis capitis. We report a novel treatment modality in four patients with head lice who were successfully treated with a topical application of albendazole.

  10. Topical steroid addiction in atopic dermatitis

    Directory of Open Access Journals (Sweden)

    Fukaya M

    2014-10-01

Full Text Available Mototsugu Fukaya (Tsurumai Kouen Clinic, Nagoya), Kenji Sato (Department of Dermatology, Hannan Chuo Hospital, Osaka), Mitsuko Sato (Sato Pediatric Clinic, Osaka), Hajime Kimata (Kimata Hajime Clinic, Osaka), Shigeki Fujisawa (Fujisawa Dermatology Clinic, Tokyo), Haruhiko Dozono (Dozono Medical House, Kagoshima), Jun Yoshizawa (Yoshizawa Dermatology Clinic, Yokohama), Satoko Minaguchi (Department of Dermatology, Kounosu Kyousei Hospital, Saitama), Japan. Abstract: The American Academy of Dermatology published a new guideline regarding topical therapy in atopic dermatitis in May 2014. Although topical steroid addiction or red burning skin syndrome had been mentioned as possible side effects of topical steroids in a 2006 review article in the Journal of the American Academy of Dermatology, no statement was made regarding this illness in the new guidelines. This suggests that there are still controversies regarding this illness. Here, we describe the clinical features of topical steroid addiction or red burning skin syndrome, based on the treatment of many cases of the illness. Because there have been few articles in the medical literature regarding this illness, the description in this article will be of some benefit to better understand the illness and to spur discussion regarding topical steroid addiction or red burning skin syndrome. Keywords: topical steroid addiction, atopic dermatitis, red burning skin syndrome, rebound, corticosteroid, eczema

  11. The fictionality of topic modeling: Machine reading Anthony Trollope's Barsetshire series

    Directory of Open Access Journals (Sweden)

    Rachel Sagner Buurma

    2015-12-01

Full Text Available This essay describes how using unsupervised topic modeling (specifically the latent Dirichlet allocation topic modeling algorithm in MALLET) on relatively small corpuses can help scholars of literature circumvent the limitations of some existing theories of the novel. Using an example drawn from work on Victorian novelist Anthony Trollope's Barsetshire series, it argues that unsupervised topic modeling's counter-factual and retrospective reconstruction of the topics out of which a given set of novels have been created allows for a denaturalizing and unfamiliar (though crucially not “objective” or “unbiased”) view. In other words, topic models are fictions, and scholars of literature should consider reading them as such. Drawing on one aspect of Stephen Ramsay's idea of algorithmic criticism, the essay emphasizes the continuities between “big data” methods and techniques and longer-standing methods of literary study.

  12. Treatment of angiofibromas in tuberous sclerosis complex: the effect of topical rapamycin and concomitant laser therapy.

    Science.gov (United States)

    Park, Jin; Yun, Seok-Kweon; Cho, Yong-Sun; Song, Ki-Hun; Kim, Han-Uk

    2014-01-01

Facial angiofibromas are the most troublesome cutaneous manifestations of the tuberous sclerosis complex and are difficult to treat. Lasers are most commonly used to treat these skin lesions, but results are disappointing, with frequent recurrences. Recently, treatment of facial angiofibromas with topical rapamycin has been reported to yield promising results. In 4 cases we observed that laser ablation was needed in addition to topical rapamycin to obtain the best results. The results showed that topical rapamycin ointment was sufficient while the papules were still small, i.e. less than a few millimeters, but additional laser ablation was needed for papules larger than approximately 4 mm. Considering the natural course of facial angiofibromas, we believe that topical rapamycin is best used in childhood patients. In adults, topical rapamycin was useful for treating the still-present small papules and for preventing recurrences after laser treatment.

  13. Dressings and topical agents for preventing pressure ulcers.

    Science.gov (United States)

    Moore, Zena E H; Webster, Joan

    2013-08-18

    Pressure ulcers, which are localised injury to the skin, or underlying tissue or both, occur when people are unable to reposition themselves to relieve pressure on bony prominences. Pressure ulcers are often difficult to heal, painful and impact negatively on the individual's quality of life. The cost implications of pressure ulcer treatment are considerable, compounding the challenges in providing cost effective, efficient health services. Efforts to prevent the development of pressure ulcers have focused on nutritional support, pressure redistributing devices, turning regimes and the application of various topical agents and dressings designed to maintain healthy skin, relieve pressure and prevent shearing forces. Although products aimed at preventing pressure ulcers are widely used, it remains unclear which, if any, of these approaches are effective in preventing the development of pressure ulcers. To evaluate the effects of dressings and topical agents on the prevention of pressure ulcers, in people of any age without existing pressure ulcers, but considered to be at risk of developing a pressure ulcer, in any healthcare setting. In February 2013 we searched the following electronic databases to identify reports of relevant randomised clinical trials (RCTs): the Cochrane Wounds Group Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Database of Abstracts of Reviews of Effects (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE; and EBSCO CINAHL. We included RCTs evaluating the use of dressings, topical agents, or topical agents with dressings, compared with a different dressing, topical agent, or combined topical agent and dressing, or no intervention or standard care, with the aim of preventing the development of a pressure ulcer. We assessed trials for their appropriateness for inclusion and for their risk of bias. This was done by two review

  14. Offshore Structural Control Considering Fluid Structure Interaction

    Institute of Scientific and Technical Information of China (English)

    Ju Myung KIM; Dong Hyawn KIM; Gyu Won LEE

    2006-01-01

A Tuned Mass Damper (TMD) was applied to an offshore structure to control ocean wave-induced vibration. In the analysis of the dynamic response of the offshore structure, fluid-structure interaction is considered and the errors that occur in the linearization of the interaction are investigated. To investigate the performance of the TMD in controlling the vibration, both regular waves with different periods and irregular waves with different significant wave heights are used. Based on the numerical analysis, it is concluded that fluid-structure interaction should be considered when evaluating the capability of a TMD to control vibration of offshore structures.

  15. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface, including groundwater dynamics, into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial in such an application for understanding the runtime behavior, identifying optimum model settings, and pinpointing potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience from coupling, application tuning (e.g. a 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM, of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed
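
    To make the MPMD execution model concrete, the sketch below splits a single MPI job into disjoint per-component communicators with mpi4py. This is a generic illustration with invented component names and rank counts, not TerrSysMP's actual OASIS3 coupling code.

      # Generic MPMD-style communicator split with mpi4py (illustrative only;
      # TerrSysMP couples its components through OASIS3, not through this code).
      from mpi4py import MPI

      world = MPI.COMM_WORLD
      rank = world.Get_rank()

      # Hypothetical partition: ranks 0-3 atmosphere, 4-5 land, the rest subsurface.
      if rank < 4:
          color, name = 0, "atmosphere"
      elif rank < 6:
          color, name = 1, "land"
      else:
          color, name = 2, "subsurface"

      comp_comm = world.Split(color, key=rank)  # per-component communicator
      print(f"world rank {rank} -> {name} rank {comp_comm.Get_rank()}")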

  16. Biosimilars: what do patients need to consider?

    Science.gov (United States)

    Skingle, Diana

    2015-01-01

    A view from the EULAR Standing Committee of People with Arthritis/Rheumatism in Europe (SCPARE) on some of the issues that patients might wish to consider about biosimilars in shared decision-making discussions with their rheumatologist. The paper also points to the need for more information on biosimilars being made available in lay language. PMID:26535149

  17. Selective Maintenance Model Considering Time Uncertainty

    OpenAIRE

    Le Chen; Zhengping Shu; Yuan Li; Xuezhi Lv

    2012-01-01

    This study proposes a selective maintenance model for a weapon system during the mission interval. First, it gives relevant definitions and the operational process of the material support system. Then, it reviews current research on selective maintenance modeling. Finally, it establishes a numerical model for selecting corrective and preventive maintenance tasks, considering the time uncertainty brought by the unpredictability of the maintenance procedure, the indetermination of downtime for spares and the difference of skil...

  18. Access, Consider, Teach: ACT in Your Classroom

    Science.gov (United States)

    Stanford, Pokey; Reeves, Stacy

    2007-01-01

    University teachers who are teacher educators cannot connect to "The Millennial Generation" of today's preservice learners by using chalk and dull, outdated textbooks. When university professionals access the technology available, consider the curriculum, and teach with technology (ACT), undergraduate teacher candidates acquire the vision of…

  19. Tempo curves considered harmful (part 1)

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1991-01-01

    A column (the first of a series of three) constitutes an abridged and adapted version of 'Tempo curves considered harmful'. Two friends, an amateur mathematician (M) and a would-be psychologist (P), invited a retired pianist to do some experiments with their new sequencer program. As musical material

  20. Tempo curves considered harmful (part 2)

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1991-01-01

    A column (the second of a series of three) constitutes an abridged and adapted version of 'Tempo curves considered harmful'. M (an amateur mathematician) and P (a would-be psychologist) incorporated some generative models for expressive timing in their sequencer program. This proved partially successful

  1. ROLE OF TOPICAL BACTERIAL LYSATES IN PEDIATRIC PRACTICE

    Directory of Open Access Journals (Sweden)

    L.R. Selimzyanova

    2009-01-01

    Recurrent and chronic infections are a pressing problem in pediatrics. The regular use of antibiotics leads to resistance among oral cavity pathogens and disturbs the balance of the normal microflora. Bacterial lysates are considered an efficient means of preventing and treating acute and chronic respiratory diseases. The article is an executive summary of the literature on bacterial immune response modulating agents, including a description of the mechanism of action of these medications, and provides information on the peculiarities of topical bacterial lysates. Key words: children, acute respiratory infections, chronic pharyngitis, chronic tonsillitis, stomatitis, bacterial immune response modulating agent.

  2. 2011 annual meeting on nuclear technology. Topical sessions. Pt. 5; Jahrestagung Kerntechnik 2011. Fachsitzungsberichte. T. 5

    Energy Technology Data Exchange (ETDEWEB)

    Fazio, Concetta [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany). Nuclear Safety Research Programme

    2011-12-15

    Summary report on the Topical Session of the Annual Conference on Nuclear Technology held in Berlin, 17 to 19 May 2011: - Sodium Cooled Fast Reactors. The reports on the Topical Sessions: - CFD-Simulations for Safety Relevant Tasks, - Final Disposal: From Scientific Basis to Application, - Characteristics of a High Reliability Organization (HRO) Considering Experience Gained from Events at Nuclear Power Stations, and - Nuclear Competence in Germany and Europe have been covered in atw 7, 8/9, 10 and 11 (2011). (orig.)

  3. Ring keratitis due to topical anaesthetic abuse in a contact lens wearer.

    Science.gov (United States)

    Kurna, Sevda Aydin; Sengor, Tomris; Aki, Suat; Agirman, Yasemin

    2012-07-01

    A 38-year-old woman wearing hydrogel coloured contact lenses presented to the clinic with a painful red eye and epiphora. On biomicroscopy, a large corneal epithelial defect and ring infiltrate were observed. She had been using topical anaesthetic drops for 10 days. After cessation of the anaesthetic drops, the corneal lesions resolved completely in two weeks. On evaluation of a contact lens user with atypical keratitis, misuse of topical anaesthetics should also be considered.

  4. 2011 annual meeting on nuclear technology. Pt. 4. Topical sessions; Jahrestagung Kerntechnik 2011. T. 4. Fachsitzungsberichte

    Energy Technology Data Exchange (ETDEWEB)

    Schoenfelder, Christian; Dams, Wolfgang [AREVA NP GmbH, Offenbach (Germany)

    2011-11-15

    Summary report on the Topical Session of the Annual Conference on Nuclear Technology held in Berlin, 17 to 19 May 2011: - Nuclear Competence in Germany and Europe. The Topical Session: - Sodium Cooled Fast Reactors -- will be covered in a report in a further issue of atw. The reports on the Topical Sessions: - CFD-Simulations for Safety Relevant Tasks; and - Final Disposal: From Scientific Basis to Application; - Characteristics of a High Reliability Organization (HRO) Considering Experience Gained from Events at Nuclear Power Stations -- have been covered in atw 7, 8/9, and 10 (2011). (orig.)

  5. Should Introductory Comparative Philosophy Courses Be Structured Around Topics or Traditions?

    Directory of Open Access Journals (Sweden)

    Jeremy Henkel

    2016-07-01

    Should introductory courses in comparative philosophy be organized around traditions or around topics? Will students be better served by considering Indian, Chinese, African, and Native American philosophies in depth and in sequence, or by exploring differing philosophical approaches to such topics as beauty, moral responsibility, and human nature? Each approach has reasons that recommend it, but each also brings with it serious limitations. In this essay I rehearse what I take to be the most salient arguments both for and against each approach. In the end, I conclude that, for introductory courses in comparative philosophy, an approach organized around traditions is preferable to one organized around topics.

  6. Systemic vs. Topical Therapy for the Treatment of Vulvovaginal Candidiasis

    Directory of Open Access Journals (Sweden)

    Sebastian Faro

    1994-01-01

    It is estimated that 75% of all women will experience at least 1 episode of vulvovaginal candidiasis (VVC) during their lifetimes. Most patients with acute VVC can be treated with short-term regimens that optimize compliance. Since current topical and oral antifungals have shown comparably high efficacy rates, other issues should be considered in determining the most appropriate therapy. It is possible that the use of short-duration narrow-spectrum agents may increase selection of more resistant organisms which will result in an increase of recurrent VVC (RVVC). Women who are known or suspected to be pregnant and women of childbearing age who are not using a reliable means of contraception should receive topical therapy, as should those who are breast-feeding or receiving drugs that can interact with an oral azole and those who have previously experienced adverse effects during azole therapy. Because of the potential risks associated with systemic treatment, topical therapy with a broad-spectrum agent should be the method of choice for VVC, whereas systemic therapy should be reserved for either RVVC or cases where the benefits outweigh any possible adverse reactions.

  7. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    Science.gov (United States)

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
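
    As a rough illustration of the off-the-shelf building blocks the authors improve upon (scikit-learn's standard NMF and t-SNE, not the accelerated TopicLens algorithms themselves, and with a toy corpus):

      # Baseline topic modeling plus 2D embedding with standard tools
      # (TopicLens proposes faster variants of both steps; corpus is a toy).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import NMF
      from sklearn.manifold import TSNE

      docs = ["the cat sat on the mat", "dogs and cats are pets",
              "stock markets fell sharply", "investors sold shares today"]

      X = TfidfVectorizer(stop_words="english").fit_transform(docs)
      W = NMF(n_components=2, init="nndsvd", random_state=0).fit_transform(X)     # doc-topic weights
      xy = TSNE(n_components=2, perplexity=2.0, random_state=0).fit_transform(W)  # 2D layout
      print(xy)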

  8. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used topic modeling to uncover the structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, data on the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input to the topic modeling. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was increasingly stable. Both core journals paid broad attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the productivity factors of countries, institutions, domains, and journals.
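
    The perplexity-based choice of the number of topics described above can be sketched with scikit-learn's LDA implementation; the corpus below is an invented stand-in, not the Informetrics dataset.

      # Choose the number of LDA topics by held-out perplexity
      # (invented stand-in corpus, not the Informetrics data).
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.model_selection import train_test_split

      docs = ["citation analysis of journals", "h index and impact factor",
              "co-authorship network analysis", "bibliometric indicators of output",
              "altmetrics and social media mentions", "journal ranking by citations"]
      X = CountVectorizer().fit_transform(docs)
      X_train, X_test = train_test_split(X, test_size=0.33, random_state=0)

      best = min((LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
                  for k in (2, 3, 4)),
                 key=lambda lda: lda.perplexity(X_test))
      print(best.n_components, "topics minimize held-out perplexity")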

  9. Topics in linear optical quantum computation

    Science.gov (United States)

    Glancy, Scott Charles

    This thesis covers several topics in optical quantum computation. A quantum computer is a computational device which is able to manipulate information by performing unitary operations on some physical system whose state can be described as a vector (or mixture of vectors) in a Hilbert space. The basic unit of information, called the qubit, is considered to be a system with two orthogonal states, which are assigned logical values of 0 and 1. Photons make excellent candidates to serve as qubits. They interact only weakly with the environment. Many operations can be performed using very simple linear optical devices such as beam splitters and phase shifters. Photons can easily be processed through circuit-like networks. Operations can be performed in very short times. Photons are ideally suited for the long-distance communication of quantum information. The great difficulty in constructing an optical quantum computer is that photons naturally interact weakly with one another. This thesis first gives a brief review of two early approaches to optical quantum computation. It describes how any discrete unitary operation can be performed using a single photon and a network of beam splitters, and how the Kerr effect can be used to construct a two-photon logic gate. Second, this work provides a thorough introduction to the linear optical quantum computer developed by Knill, Laflamme, and Milburn. It then presents this author's results on the reliability of this scheme when implemented using imperfect photon detectors. This author finds that quantum computers of this sort cannot be built using current technology. Third, this dissertation describes a method for constructing a linear optical quantum computer using nearly orthogonal coherent states of light as the qubits. It shows how a universal set of logic operations can be performed, including calculations of the fidelity with which these operations may be accomplished. It discusses methods for reducing and
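
    The elementary fact that lossless linear optical elements act as unitaries on the optical modes is easy to check numerically; the sketch below builds a generic beam-splitter matrix (a standard parameterization, not taken from the thesis) and verifies its unitarity.

      # A lossless beam splitter acts on two optical modes as a 2x2 unitary;
      # together with phase shifters, such elements generate any mode unitary.
      import numpy as np

      def beam_splitter(theta, phi):
          """2x2 mode transformation of a beam splitter (angle theta, phase phi)."""
          return np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                           [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])

      U = beam_splitter(np.pi / 4, 0.3)
      print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: the matrix is unitary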

  10. Diabetes topics associated with engagement on Twitter.

    Science.gov (United States)

    Harris, Jenine K; Mart, Adelina; Moreland-Russell, Sarah; Caburnay, Charlene A

    2015-05-07

    Social media are widely used by the general public and by public health and health care professionals. Emerging evidence suggests engagement with public health information on social media may influence health behavior. However, the volume of data accumulating daily on Twitter and other social media is a challenge for researchers with limited resources to further examine how social media influence health. To address this challenge, we used crowdsourcing to facilitate the examination of topics associated with engagement with diabetes information on Twitter. We took a random sample of 100 tweets that included the hashtag "#diabetes" from each day during a constructed week in May and June 2014. Crowdsourcing through Amazon's Mechanical Turk platform was used to classify tweets into 9 topic categories and their senders into 3 Twitter user categories. Descriptive statistics and Tweedie regression were used to identify tweet and Twitter user characteristics associated with 2 measures of engagement, "favoriting" and "retweeting." Classification was reliable for tweet topics and Twitter user type. The most common tweet topics were medical and nonmedical resources for diabetes. Tweets that included information about diabetes-related health problems were positively and significantly associated with engagement. Tweets about diabetes prevalence, nonmedical resources for diabetes, and jokes or sarcasm about diabetes were significantly negatively associated with engagement. Crowdsourcing is a reliable, quick, and economical option for classifying tweets. Public health practitioners aiming to engage constituents around diabetes may want to focus on topics positively associated with engagement.
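
    Tweedie regression, which the authors used for the zero-heavy engagement counts, is available in statsmodels; the sketch below runs it on invented toy data, not the study's tweets, and the variance power of 1.5 is an illustrative choice.

      # Tweedie GLM for a zero-heavy engagement outcome (toy data only;
      # 1 < var_power < 2 gives a compound Poisson-gamma distribution
      # that tolerates the many exact zeros typical of retweet counts).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      X = sm.add_constant(rng.integers(0, 2, size=(200, 2)))  # two binary topic flags
      y = np.where(rng.random(200) < 0.6, 0.0, rng.gamma(2.0, 3.0, 200))  # retweets

      fit = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5)).fit()
      print(fit.params)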

  11. Topical treatment options for conjunctival neoplasms

    Directory of Open Access Journals (Sweden)

    Jonathan W Kim

    2008-10-01

    Jonathan W Kim, David H Abramson. Ophthalmic Oncology Service, Memorial Sloan-Kettering Cancer Center, New York, NY, USA. Abstract: Topical therapies offer a nonsurgical method for treating conjunctival tumors by delivering high drug concentrations to the ocular surface. Over the past ten years, topical agents have been used by investigators to treat various premalignant and malignant lesions of the conjunctiva, such as primary acquired melanosis with atypia, conjunctival melanoma, squamous intraepithelial neoplasia and squamous cell carcinoma of the conjunctiva, and pagetoid spread of the conjunctiva arising from sebaceous cell carcinoma. Despite the enthusiasm generated by the success of these agents, there are unanswered questions regarding the clinical efficacy of this new nonsurgical approach, and whether a single topical agent can achieve cure rates comparable with traditional therapies. Furthermore, the long-term consequences of prolonged courses of topical chemotherapeutic drugs on the ocular surface are unknown, and the ideal regimen for each of these agents is still being refined. In this review, we present specific guidelines for treating both melanocytic and squamous neoplasms of the conjunctiva, utilizing the available data in the literature as well as our own clinical experience at the Memorial Sloan-Kettering Cancer Center. Keywords: topical therapies, conjunctival neoplasms, melanosis, Mitomycin-C, 5-Fluorouracil

  12. Polymethacrylate microparticles gel for topical drug delivery.

    Science.gov (United States)

    Labouta, Hagar Ibrahim; El-Khordagui, Labiba K

    2010-10-01

    To evaluate the potential of particulate delivery systems in topical drug delivery, polymethacrylate microparticles (MPs) incorporating verapamil hydrochloride (VRP), a model hydrophilic drug with potential topical clinical uses, were prepared using Eudragit RS100 and Eudragit L100 for the formulation of a composite topical gel. The effect of initial drug loading, polymer composition, particularly the proportion of Eudragit L100 as an interacting polymer component, and the HLB of the dispersing agent on MPs characteristics was investigated. A test MPs formulation was incorporated in a gel and evaluated for drug release and human skin permeation. MPs showed high incorporation efficiency and yield. The composition of the hybrid polymer matrix was a main determinant of MPs characteristics, particularly drug release. Factors known to influence drug release, such as MPs size and high drug solubility, were outweighed by the strong VRP-Eudragit L100 interaction. The developed MPs gel showed controlled VRP release and reduced skin retention compared to a free drug gel. Topical drug delivery and skin retention could be modulated using particulate delivery systems. From a practical standpoint, the VRP gel developed may offer advantage in a range of dermatological conditions, in response to the growing off-label topical use of VRP.

  13. Delphi survey to identify topics to be addressed at the initial follow-up consultation after oesophageal cancer surgery.

    Science.gov (United States)

    Jacobs, M; Henselmans, I; Macefield, R C; Blencowe, N S; Smets, E M A; de Haes, J C J M; Sprangers, M A G; Blazeby, J M; van Berge Henegouwen, M I

    2014-12-01

    There is no consensus among patients and healthcare professionals (HCPs) on the topics that need to be addressed after oesophageal cancer surgery. The aim of this study was to identify these topics, using a two-round Delphi survey. In round 1, patients and HCPs (surgeons, dieticians, nurses) were invited to rate the importance of 49 topics. The proportion of panellists that considered a topic to be of low, moderate or high importance was then calculated for each of these two groups. Based on these proportions and the i.q.r., topics were categorized as: 'consensus to be included', 'consensus to be excluded' and 'no consensus'. Only topics in the first category were included in the second round. In round 2, panellists were provided with individual and group feedback. To be included in the final list, topics had to meet criteria for consensus and stability. There were 108 patients and 77 HCPs in the round 2 analyses. In general, patients and HCPs considered the same topics important. The final list included 23 topics and revealed that it was most important to address: cancer removed/lymph nodes, the new oesophagus, eating and drinking, surgery, alarming new complaints and the recovery period. The study provides surgeons with a list of topics selected by patients and HCPs that may be addressed systematically at the initial follow-up consultation after oesophageal cancer surgery. © 2014 BJS Society Ltd. Published by John Wiley & Sons Ltd.
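
    A minimal sketch of the consensus logic described above, classifying each topic from panellists' ratings by the proportion rating it highly and the i.q.r.; the 1-9 scale and the thresholds are illustrative assumptions, not the study's exact criteria.

      # Classify a Delphi topic from panellist ratings (assumed 1-9 scale;
      # thresholds are illustrative, not the study's exact criteria).
      import numpy as np

      def classify_topic(ratings, prop_cut=0.7, iqr_cut=2.0):
          r = np.asarray(ratings)
          iqr = np.percentile(r, 75) - np.percentile(r, 25)
          if (r >= 7).mean() >= prop_cut and iqr <= iqr_cut:
              return "consensus to be included"
          if (r <= 3).mean() >= prop_cut and iqr <= iqr_cut:
              return "consensus to be excluded"
          return "no consensus"

      print(classify_topic([8, 9, 7, 8, 9, 6, 8, 7]))  # consensus to be included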

  14. Considering the Problem of Insider IT Misuse

    OpenAIRE

    Steven Furnell; Aung Htike Phyo

    2003-01-01

    In recent years the Internet connection has become a frequent point of attack for most organisations. However, the loss due to insider misuse is far greater than the loss due to external abuse. This paper focuses on the problem of insider misuse, its scale, and how it has affected organisations. The paper also discusses why access controls alone cannot be used to address the problem, and proceeds to consider how techniques currently associated with Intrusion Detection Systems can po...

  15. Considering Air Density in Wind Power Production

    CERN Document Server

    Farkas, Zénó

    2011-01-01

    In wind power production calculations the air density is usually taken as constant in time. Using the CIPM-2007 equation for the density of moist air as a function of air temperature, air pressure and relative humidity, we show that it is worth taking the variation of the air density into account, because higher accuracy can be obtained in the power production calculation for little effort.
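
    The effect is easy to reproduce with the simpler ideal-gas approximation (a stand-in for the full CIPM-2007 formulation used in the paper; the Magnus saturation-pressure constants and the turbine figures below are illustrative assumptions).

      # Moist-air density via the ideal-gas approximation (the paper uses
      # the more accurate CIPM-2007 equation) and its effect on wind power.
      import math

      def air_density(t_c, p_pa, rel_humidity):
          """rho = p_dry/(R_d*T) + p_vap/(R_v*T), with Magnus saturation pressure."""
          T = t_c + 273.15
          p_sat = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))  # Pa
          p_vap = rel_humidity * p_sat
          return (p_pa - p_vap) / (287.05 * T) + p_vap / (461.5 * T)

      def wind_power(rho, area_m2, v_ms, cp=0.45):
          return 0.5 * cp * rho * area_m2 * v_ms ** 3  # P = 1/2 Cp rho A v^3

      # Same rotor and wind speed, cold dry day vs warm humid day:
      print(wind_power(air_density(-5.0, 101325.0, 0.3), 5000.0, 10.0))
      print(wind_power(air_density(30.0, 100500.0, 0.9), 5000.0, 10.0))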

  16. Sandia`s network for Supercomputing `94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing `94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  17. A Reliability Calculation Method for Web Service Composition Using Fuzzy Reasoning Colored Petri Nets and Its Application on Supercomputing Cloud Platform

    Directory of Open Access Journals (Sweden)

    Ziyun Deng

    2016-09-01

    In order to develop a Supercomputing Cloud Platform (SCP) prototype system using Service-Oriented Architecture (SOA) and Petri nets, we researched several technologies for Web service composition. Specifically, in this paper, we propose a reliability calculation method for Web service compositions, which uses a Fuzzy Reasoning Colored Petri Net (FRCPN) to verify the compositions. We put forward a definition of semantic threshold similarity for Web services and a formal definition of the FRCPN. We analyzed five kinds of production rules in the FRCPN, and applied our method to the SCP prototype. We obtained the reliability value of the end Web service as an indicator of the overall reliability of the FRCPN. The method can also test the activity of the FRCPN. Experimental results show that the reliability of the Web service composition is correlated with the number of Web services and the range of reliability transition values.
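
    The paper's fuzzy-reasoning machinery is beyond a short sketch, but the underlying idea, aggregating a composition's reliability from member-service reliabilities, can be shown with the textbook series/parallel formulas (a standard stand-in, not the authors' FRCPN method):

      # Textbook reliability aggregation for a service composition
      # (a stand-in for the paper's fuzzy-reasoning Petri-net method).
      from functools import reduce

      def series(rs):    # every service in the chain must succeed
          return reduce(lambda a, b: a * b, rs, 1.0)

      def parallel(rs):  # succeeds if any redundant replica succeeds
          return 1.0 - reduce(lambda a, b: a * (1.0 - b), rs, 1.0)

      # Three services in a pipeline; the middle one runs as two replicas.
      print(round(series([0.99, parallel([0.95, 0.95]), 0.98]), 4))  # 0.9678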

  18. Enabling Loosely-Coupled Serial Job Execution on the IBM BlueGene/P Supercomputer and the SiCortex SC5832

    CERN Document Server

    Raicu, Ioan; Wilde, Mike; Foster, Ian

    2008-01-01

    Our work addresses the execution of highly parallel computations composed of loosely coupled serial jobs, with no modifications to the respective applications, on large-scale systems. This approach allows new, and potentially far larger, classes of applications to leverage systems such as the IBM Blue Gene/P supercomputer and similar emerging petascale architectures. We present here the challenges of I/O performance encountered in making this model practical, and show results using both micro-benchmarks and real applications on two large-scale systems, the BG/P and the SiCortex SC5832. Our preliminary benchmarks show that we can scale to 4096 processors on the Blue Gene/P and 5832 processors on the SiCortex with high efficiency, and can achieve thousands of tasks/sec sustained execution rates for parallel workloads of ordinary serial applications. We measured applications from two domains, economic energy modeling and molecular dynamics.
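
    The many-task execution model, a large pool of workers draining a queue of independent serial jobs, can be sketched on a single node with a process pool; this is only an analogy for the petascale dispatchers the paper describes, with an invented stand-in workload.

      # Single-node analogy of many-task computing: dispatch many independent
      # serial jobs onto a pool of workers (the paper does this at petascale).
      from concurrent.futures import ProcessPoolExecutor
      import math

      def serial_job(seed):
          """Stand-in for one run of an ordinary serial application."""
          return sum(math.sin(i * seed) for i in range(100_000))

      if __name__ == "__main__":
          with ProcessPoolExecutor(max_workers=8) as pool:
              results = list(pool.map(serial_job, range(64)))
          print(len(results), "tasks completed")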

  19. Total Site Heat Integration Considering Pressure Drops

    Directory of Open Access Journals (Sweden)

    Kew Hong Chew

    2015-02-01

    Pressure drop is an important consideration in Total Site Heat Integration (TSHI), owing to the typically large distances between the different plants and the flow across plant elevations and equipment, including heat exchangers. Failure to consider pressure drop during utility targeting and heat exchanger network (HEN) synthesis may, at best, lead to optimistic energy targets and, at worst, to an inoperable system if the pumps or compressors cannot overcome the actual pressure drop. Most studies have addressed the pressure drop factor in terms of pumping cost, forbidden matches or allowable pressure drop constraints in the optimisation of the HEN. This study looks at the implication of pressure drop in the context of a Total Site. The graphical Pinch-based TSHI methodology is extended to consider the pressure drop factor during the minimum energy requirement (MER) targeting stage. The improved methodology provides a more realistic estimation of the MER targets and valuable insights for the implementation of the TSHI design. In the case study, when pressure drop in the steam distribution networks is considered, the heating and cooling duties increase by 14.5 % and 4.5 %, respectively.
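
    For orientation, line pressure drops of the kind folded into the targeting stage are commonly estimated with the Darcy-Weisbach equation; the pipe data below are invented, not the case study's values.

      # Darcy-Weisbach pressure drop for a site utility line
      # (invented pipe data; not the case study's values).
      def darcy_weisbach(f, length_m, diameter_m, rho, v_ms):
          """dp = f * (L/D) * rho * v^2 / 2, in Pa."""
          return f * (length_m / diameter_m) * rho * v_ms ** 2 / 2.0

      dp = darcy_weisbach(f=0.02, length_m=800.0, diameter_m=0.3,
                          rho=5.0, v_ms=25.0)  # low-pressure steam main
      print(f"{dp / 1e5:.2f} bar lost over the header")  # about 0.83 bar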

  20. GVD compensation schemes with considering PMD

    Institute of Scientific and Technical Information of China (English)

    Aiying Yang(杨爱英); Anshi Xu(徐安士); Deming Wu(吴德明)

    2003-01-01

    Three group velocity dispersion (GVD) compensation schemes, i.e., the post-compensation, pre-compensation and hybrid-compensation schemes, are discussed with polarization mode dispersion (PMD) taken into account. In the 10- and 40-Gbit/s non-return-to-zero (NRZ) on-off-keying (OOK) systems, three physical factors are considered: the Kerr effect, GVD and PMD. The numerical results show that, when the impact of PMD is taken into account, the GVD pre-compensation scheme performs best, with an average eye-opening penalty (EOP) more than 1 dB better when the input power rises to 10 dBm in the 10-Gbit/s system. However, the GVD post-compensation scheme performs best for the 40-Gbit/s case with input power less than 13 dBm, and GVD pre-compensation becomes better if the input power increases beyond this range. The results differ from those already reported under the assumption that the impact of PMD is negligible. Therefore, the research in this paper provides a different insight into system optimization when PMD, the Kerr effect and GVD are all considered.
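
    For context, the bit-rate sensitivity to GVD can be gauged from the dispersion length L_D = T0^2/|beta2|; the sketch below uses assumed standard single-mode fibre parameters, not the paper's simulation setup.

      # Dispersion length L_D = T0^2 / |beta2| for a Gaussian pulse
      # (assumed standard fibre, beta2 = -21.7 ps^2/km at 1550 nm).
      def dispersion_length_km(t0_ps, beta2_ps2_per_km=-21.7):
          return t0_ps ** 2 / abs(beta2_ps2_per_km)

      # ~50 ps pulses at 10 Gbit/s vs ~12.5 ps pulses at 40 Gbit/s:
      for t0 in (50.0, 12.5):
          print(f"T0 = {t0} ps -> L_D = {dispersion_length_km(t0):.1f} km")
      # The 16x shorter dispersion length is why 40 Gbit/s systems are far
      # more sensitive to the choice of compensation scheme.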

  1. Risk assessment of topically applied products

    DEFF Research Database (Denmark)

    Søborg, Tue; Basse, Line Hollesen; Halling-Sørensen, Bent

    2007-01-01

    The human risk of harmful substances in semisolid topical dosage forms applied topically to normal skin and broken skin, respectively, was assessed. Bisphenol A diglycidyl ether (BADGE) and three derivatives of BADGE previously quantified in aqueous cream and the UV filters 3-BC and 4-MBC were used as model compounds. Tolerable daily intake (TDI) values have been established for BADGE and derivatives. Endocrine disruption was chosen as endpoint for 3-BC and 4-MBC. Skin permeation of the model compounds was investigated in vitro using pig skin membranes. Tape stripping was applied to simulate broken skin... parameters for estimating the risk. The immediate human risk of BADGE and derivatives in topical dosage forms was found to be low. However, local treatment of broken skin may lead to higher exposure of BADGE and derivatives compared to application to normal skin. 3-BC permeated skin at higher flux than 4-MBC...

  2. Getting Started with Topic Modeling and MALLET

    Directory of Open Access Journals (Sweden)

    Shawn Graham

    2012-09-01

    In this lesson you will first learn what topic modeling is and why you might want to employ it in your research. You will then learn how to install and work with the MALLET natural language processing toolkit to do so. MALLET involves modifying an environment variable (essentially, setting up a short-cut so that your computer always knows where to find the MALLET program) and working with the command line (i.e., by typing in commands manually, rather than clicking on icons or menus). We will run the topic modeller on some example files, and look at the kinds of outputs that MALLET generates. This will give us a good idea of how it can be used on a corpus of texts to identify topics found in the documents without reading them individually.
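
    The same command-line workflow can be driven from a script; the sketch below shells out to MALLET with the import and training flags this lesson introduces (the installation path and file names are placeholders to adjust for your machine).

      # Drive the MALLET workflow from Python (the install path and file
      # names are placeholders; adjust them for your own machine).
      import subprocess

      MALLET = "/opt/mallet/bin/mallet"  # wherever MALLET is installed

      # 1. Import a directory of .txt files into MALLET's binary format.
      subprocess.run([MALLET, "import-dir", "--input", "sample-data",
                      "--output", "tutorial.mallet",
                      "--keep-sequence", "--remove-stopwords"], check=True)

      # 2. Train a topic model and write the outputs the lesson examines.
      subprocess.run([MALLET, "train-topics", "--input", "tutorial.mallet",
                      "--num-topics", "20",
                      "--output-topic-keys", "tutorial_keys.txt",
                      "--output-doc-topics", "tutorial_composition.txt"],
                     check=True)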

  3. Lecture Notes on Topics in Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Alex W.

    2002-11-15

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  4. Topical minoxidil: cardiac effects in bald man.

    Science.gov (United States)

    Leenen, F H; Smith, D L; Unger, W P

    1988-01-01

    Systemic cardiovascular effects during chronic treatment with topical minoxidil vs placebo were evaluated using a double-blind, randomized design for two parallel groups (n = 20 for minoxidil, n = 15 for placebo). During 6 months of follow-up, blood pressure did not change, whereas minoxidil increased heart rate by 3-5 beats/min. Compared with placebo, topical minoxidil caused significant increases in LV end-diastolic volume, in cardiac output (by 0.75 l/min) and in LV mass (by 5 g/m2). We conclude that in healthy subjects short-term use of topical minoxidil is unlikely to be detrimental. However, safety needs to be established regarding ischaemic symptoms in patients with coronary artery disease, as well as for the possible development of LV hypertrophy in healthy subjects during years of therapy. PMID:3191000

  5. Automated Word Puzzle Generation via Topic Dictionaries

    CERN Document Server

    Pinter, Balazs; Szabo, Zoltan; Lorincz, Andras

    2012-01-01

    We propose a general method for automated word puzzle generation. Contrary to previous approaches in this novel field, the presented method does not rely on highly structured datasets obtained with serious human annotation effort: it only needs an unstructured and unannotated corpus (i.e., document collection) as input. The method builds upon two additional pillars: (i) a topic model, which induces a topic dictionary from the input corpus (examples include e.g., latent semantic analysis, group-structured dictionaries or latent Dirichlet allocation), and (ii) a semantic similarity measure of word pairs. Our method can (i) generate automatically a large number of proper word puzzles of different types, including the odd one out, choose the related word and separate the topics puzzle. (ii) It can easily create domain-specific puzzles by replacing the corpus component. (iii) It is also capable of automatically generating puzzles with parameterizable levels of difficulty suitable for, e.g., beginners or intermedia...
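
    A minimal sketch of the odd-one-out generator under the assumptions above: a topic dictionary already induced from a corpus and a similarity measure are taken as given (here a toy hand-written dictionary stands in for both).

      # Odd-one-out puzzle from a topic dictionary (toy dictionary; a real
      # system would induce topics from a corpus and rank candidates with a
      # semantic similarity measure such as cosine over word vectors).
      import random

      TOPICS = {"fruit": ["apple", "pear", "plum", "cherry", "grape"],
                "tools": ["hammer", "saw", "drill", "wrench", "pliers"]}

      def odd_one_out(topics, n_related=4, rng=random):
          topic, other = rng.sample(list(topics), 2)
          related = rng.sample(topics[topic], n_related)
          odd = rng.choice(topics[other])
          puzzle = rng.sample(related + [odd], n_related + 1)  # shuffled
          return puzzle, odd

      puzzle, answer = odd_one_out(TOPICS)
      print(puzzle, "-> odd one out:", answer)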

  6. Lecture Notes on Topics in Accelerator Physics

    CERN Document Server

    Chao, A W

    2002-01-01

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  7. Topical and peripheral ketamine as an analgesic.

    Science.gov (United States)

    Sawynok, Jana

    2014-07-01

    Ketamine, in subanesthetic doses, produces systemic analgesia in chronic pain settings, an action largely attributed to block of N-methyl-D-aspartate receptors in the spinal cord and inhibition of central sensitization processes. N-methyl-D-aspartate receptors also are located peripherally on sensory afferent nerve endings, and this provided the initial impetus for exploring peripheral applications of ketamine. Ketamine also produces several other pharmacological actions (block of ion channels and receptors, modulation of transporters, anti-inflammatory effects), and while these may require higher concentrations, after topical (e.g., as gels, creams) and peripheral application (e.g., localized injections), local tissue concentrations are higher than those after systemic administration and can engage lower affinity mechanisms. Peripheral administration of ketamine by localized injection produced some alterations in sensory thresholds in experimental trials in volunteers and in complex regional pain syndrome subjects in experimental settings, but many variables were unaltered. There are several case reports of analgesia after topical application of ketamine given alone in neuropathic pain, but controlled trials have not confirmed such effects. A combination of topical ketamine with several other agents produced pain relief in case, and case series, reports with response rates of 40% to 75% in retrospective analyses. In controlled trials of neuropathic pain with topical ketamine combinations, there were improvements in some outcomes, but optimal dosing and drug combinations were not clear. Given orally (as a gargle, throat swab, localized peritonsillar injections), ketamine produced significant oral/throat analgesia in controlled trials in postoperative settings. Topical analgesics are likely more effective in particular conditions (patient factors, disease factors), and future trials of topical ketamine should include a consideration of factors that could predispose

  8. Topic Time Series Analysis of Microblogs

    Science.gov (United States)

    2014-10-01

    The report describes a self-exciting point-process model of topic activity on Twitter, in which the expected number of Tweets in topic k triggered by a Tweet in topic k is the branching factor, and ωk is a parameter controlling the rate of decay, i.e. how quickly a Tweet's triggering influence fades. The references cited include Yosihiko Ogata, 'Statistical models for earthquake occurrences and residual analysis for point processes', Journal of the American Statistical Association.
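
    Under that reading, the intensity for topic k takes the standard exponential-kernel Hawkes form; the sketch below is a generic illustration with invented numbers, not the report's fitted model.

      # Intensity of a self-exciting (Hawkes) process for one topic k:
      # lambda(t) = mu + branching * omega * sum_i exp(-omega * (t - t_i)),
      # where `branching` is the expected number of follow-on Tweets per
      # Tweet and `omega` controls how quickly the influence decays.
      import math

      def hawkes_intensity(t, event_times, mu=0.2, branching=0.8, omega=1.5):
          excitation = sum(math.exp(-omega * (t - ti)) for ti in event_times if ti < t)
          return mu + branching * omega * excitation

      events = [0.5, 0.9, 1.0, 2.4]  # invented Tweet times for topic k
      print(hawkes_intensity(3.0, events))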

  9. Heart block following topical latanoprost treatment

    Science.gov (United States)

    De Smit, Elisabeth; Theodorou, Maria; Hildebrand, Göran Darius; Bloom, Philip

    2011-01-01

    The authors report a case of second degree heart block associated with topical latanoprost treatment. The authors discuss the possibility of a causative effect as the cessation of this treatment resulted in improvement of the arrhythmia. The authors highlight previous reports and research in humans and animals which demonstrate an association of arrhythmias with prostaglandin analogues. This report draws attention to the possibility that an extremely commonly prescribed topical drug may trigger arrhythmias in susceptible individuals. It is important that prescribers are aware of this possible side-effect. PMID:22679164

  10. Applied atomic and collision physics special topics

    CERN Document Server

    Massey, H S W; Bederson, Benjamin

    1982-01-01

    Applied Atomic Collision Physics, Volume 5: Special Topics deals with topics on applications of atomic collisions that were not covered in the first four volumes of the treatise. The book opens with a chapter on ultrasensitive chemical detectors. This is followed by separate chapters on lighting, magnetohydrodynamic electrical power generation, gas breakdown and high voltage insulating gases, thermionic energy converters, and charged particle detectors. Subsequent chapters deal with the operation of multiwire drift and proportional chambers and streamer chambers and their use in high energy p

  11. Topical therapy in atopic dermatitis in children

    Directory of Open Access Journals (Sweden)

    Dharshini Sathishkumar

    2016-01-01

    Atopic dermatitis (AD) is a common, chronic childhood skin disorder caused by complex genetic, immunological, and environmental interactions. It significantly impairs quality of life for both child and family. Treatment is complex and must be tailored to the individual taking into account personal, social, and emotional factors, as well as disease severity. This review covers the management of AD in children with topical treatments, focusing on: education and empowerment of patients and caregivers, avoidance of trigger factors, repair and maintenance of the skin barrier by correct use of emollients, control of inflammation with topical corticosteroids and calcineurin inhibitors, minimizing infection, and the use of bandages and body suits.

  12. Safety of topical vancomycin powder in neurosurgery

    Directory of Open Access Journals (Sweden)

    Kalil G Abdullah

    2016-01-01

    Surgical site infections (SSIs) remain an important cause of morbidity following neurosurgical procedures despite the best medical practices. In addition, hospital infection rates are proposed as a metric for ranking hospitals' safety profiles to guide medical consumerism. Recently, the use of topical vancomycin, defined as the application of vancomycin powder directly into the surgical wound, has been described in both cranial and spinal surgeries as a method to reduce SSIs. Early results are promising. Here, we provide a concise primer on the pharmacology, bacterial spectrum, history, and clinical indications of topical vancomycin for the practicing surgeon.

  13. Topics in current aerosol research (part2)

    CERN Document Server

    Hidy, G M

    1972-01-01

    Topics in Current Aerosol Research, Part 2 contains some selected articles in the field of aerosol study. The chosen topics deal extensively with the theory of diffusiophoresis and thermophoresis. Also covered in the book is the mathematical treatment of integrodifferential equations originating from the theory of aerosol coagulation. The book is the third volume of the series entitled International Reviews in Aerosol Physics and Chemistry. The text offers significant understanding of the methods employed to develop a theory for thermophoretic and diffusiophoretic forces acting on spheres in t

  14. Illustrative EDOF topics in Fourier optics

    Science.gov (United States)

    George, Nicholas; Chen, Xi; Chi, Wanli

    2011-10-01

    In this talk we present a series of illustrative topics in Fourier optics that are proving valuable in the design of EDOF camera systems. They are at the level of final examination problems that can be solved by a student or professor who has studied one of Joseph W. Goodman's books, our tribute for his 75th year. As time permits, the four illustrative topics are: 1) electromagnetic waves and Fourier optics; 2) the perfect lens; 3) the connection between phase delay and radially varying focal length in an asphere; and 4) tailored EDOF designs.

  15. 75 FR 26647 - Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution

    Science.gov (United States)

    2010-05-12

    ... HUMAN SERVICES Food and Drug Administration 21 CFR Part 524 Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution AGENCY: Food and Drug Administration, HHS. ACTION: Final rule, technical amendment. SUMMARY: The Food and Drug Administration (FDA) is amending the animal drug...

  16. 76 FR 81806 - Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution

    Science.gov (United States)

    2011-12-29

    ... HUMAN SERVICES Food and Drug Administration 21 CFR Part 524 Ophthalmic and Topical Dosage Form New Animal Drugs; Ivermectin Topical Solution AGENCY: Food and Drug Administration, HHS. ACTION: Final rule. SUMMARY: The Food and Drug Administration (FDA) is amending the animal drug regulations to...

  17. Preliminary stop of the TOPical Imiquimod treatment of high-grade Cervical intraepithelial neoplasia (TOPIC) trial

    NARCIS (Netherlands)

    Koeneman, M. M.; Kruse, A. J.; Kooreman, L. F S; zur Hausen, A.; Hopman, A. H N; Sep, S. J S; Van Gorp, T.; Slangen, B. F M; van Beekhuizen, H. J.; van de Sande, A. J M; Gerestein, C. G.; Nijman, H. W.; Kruitwagen, R. F P M

    2017-01-01

    The "TOPical Imiquimod treatment of high-grade Cervical intraepithelial neoplasia" (TOPIC) trial was stopped preliminary, due to lagging inclusions. This study aimed to evaluate the treatment efficacy and clinical applicability of imiquimod 5% cream in high-grade cervical intraepithelial neoplasia (

  18. Considering the Problem of Insider IT Misuse

    Directory of Open Access Journals (Sweden)

    Steven Furnell

    2003-05-01

    In recent years the Internet connection has become a frequent point of attack for most organisations. However, the loss due to insider misuse is far greater than the loss due to external abuse. This paper focuses on the problem of insider misuse, its scale, and how it has affected organisations. The paper also discusses why access controls alone cannot be used to address the problem, and proceeds to consider how techniques currently associated with Intrusion Detection Systems can potentially be applied for insider misuse detection. General guidelines for countermeasures against insider misuse are also provided to protect data and systems.

  19. Quality factors to consider in condensate selection

    Energy Technology Data Exchange (ETDEWEB)

    Lywood, B. [Crude Quality Inc., Edmonton, AB (Canada)

    2009-07-01

    Many factors must be considered when assessing the feasibility of using condensates as a diluent for bitumen or heavy crude production blending. In addition to commercial issues, the effect of condensate quality is a key consideration. In general, condensate quality refers to density and viscosity. However, valuation decisions could be enhanced through the expansion of quality definitions and understanding. This presentation focused on the parameters that are important in choosing a diluent grade product. It also reviewed pipeline and industry specifications and provided additional information regarding general properties for bitumen and condensate compatibility; sampling and quality testing needs; and existing sources of information regarding condensate quality. tabs., figs.

  20. (Re)Considering American Studies in Greece

    Directory of Open Access Journals (Sweden)

    Zoe Detsi-Diamanti

    2006-01-01

    In writing about American studies in Greece, one is tempted to consider for a moment the fact that the field, ever since it placed itself on the international academic map, has been in a constant process of self-discovery and self-becoming. Its openness, which may be taken as evidence of its vibrant existence and its ability to reconstruct and deconstruct itself, has led to new areas of research, new formulations, new critiques, as well as to an essential paradox: although we currently wit...