WorldWideScience

Sample records for supercomputers shlomo weiss

  1. Trude Weiss-Rosmarin: Rebel with a Cause.

    Science.gov (United States)

    Reed, Barbara Straus

    1995-01-01

    Profiles Trude Weiss-Rosmarin, a German-Jewish immigrant to America who, in 1935, published an independent magazine of Jewish ideas. Notes that the periodical focused on Jewish survival in an assimilationist milieu and the role of Jewish women. States that Weiss-Rosmarin lectured frequently, attracted important readers, and contributed…

  2. Multiple phase transitions in the generalized Curie-Weiss model

    International Nuclear Information System (INIS)

    Eisele, T.; Ellis, R.S.

    1988-01-01

    The generalized Curie-Weiss model is an extension of the classical Curie-Weiss model in which the quadratic interaction function of the mean spin value is replaced by a more general interaction function. It is shown that the generalized Curie-Weiss model can have a sequence of phase transitions at different critical temperatures. Both first-order and second-order phase transitions can occur, and explicit criteria for the two types are given. Three examples of generalized Curie-Weiss models are worked out in detail, including one example with infinitely many phase transitions. A number of results are derived using large-deviation techniques
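
    For reference (standard background, not taken from the paper itself), the classical Curie-Weiss model assigns to a spin configuration an energy quadratic in the mean spin, and the generalization described above replaces that quadratic with a more general interaction function F. In the usual large-deviation formulation for ±1 spins, schematically:

        H_N(\sigma) = -N\,F\!\left(m_N(\sigma)\right), \qquad m_N(\sigma) = \frac{1}{N}\sum_{i=1}^{N}\sigma_i, \qquad F(m) = \frac{J}{2}m^2 \ \text{(classical case)},

        -\beta f(\beta) = \sup_{m\in[-1,1]}\Big\{\beta F(m) - I(m)\Big\}, \qquad I(m) = \frac{1+m}{2}\ln(1+m) + \frac{1-m}{2}\ln(1-m).

    Phase transitions then correspond to changes in the set of maximizers as the temperature varies; the paper's specific interaction functions and first-/second-order criteria are not reproduced here.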

  3. What is supercomputing?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers, and the technical term ''supercomputing'', have spread over the past ten years. The performances of the main computers installed so far in the Japan Atomic Energy Research Institute are compared. There are two ways to increase computing speed using existing circuit elements: the parallel processor system and the vector processor system. The CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations abroad, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy follows the trend of technical development in atomic energy, and divides into speeding up existing simulation calculations and accelerating new technical developments in atomic energy. Examples of supercomputing in the Japan Atomic Energy Research Institute are reported. (K.I.)

  4. [Psychoanalysis and fascism: two incompatible approaches. The difficult role of Edoardo Weiss].

    Science.gov (United States)

    Accerboni, A M

    1988-01-01

    Edoardo Weiss, the only direct disciple of Freud in Italy, returned to Trieste, his native town, in 1919 after a long period of psychoanalytical studies in Vienna. An enthusiastic acceptance of Freud's ideas in the cultural, mainly Jewish, circles of Trieste was paralleled by a certain distrust of Fascist ideology. In 1930 Weiss decided to move to Rome, where he hoped to be able to found an Italian psychoanalytical movement. The Catholic Church, Fascist ideology, philosophical Idealism and scientific Positivism were all factors hampering the spread of psychoanalysis in Italy. In 1932 Weiss founded the Italian Psychoanalytical Society in Rome with a very small number of followers. The relations between Weiss' newborn Society and the dictatorship were to prove quite troublesome. Weiss sharply accused Ernest Jones of misrepresenting his entrée with Mussolini. Thanks to Weiss' efforts the Italian Society was recognized by the I.P.A. Finally, mention is made of Weiss' forced move to America as a result of the racial laws, and of the consequences for the future of psychoanalysis in Italy.

  5. WAIS-IV and WISC-IV Structural Validity: Alternate Methods, Alternate Results. Commentary on Weiss et al. (2013a) and Weiss et al. (2013b)

    Science.gov (United States)

    Canivez, Gary L.; Kush, Joseph C.

    2013-01-01

    Weiss, Keith, Zhu, and Chen (2013a) and Weiss, Keith, Zhu, and Chen (2013b), this issue, report examinations of the factor structure of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) and Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV), respectively; comparing Wechsler Hierarchical Model (W-HM) and…

  6. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  7. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  8. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  9. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  10. Genetics Home Reference: Jackson-Weiss syndrome

    Science.gov (United States)

    ... Jabs EW, Li X, Scott AF, Meyers G, Chen W, Eccles M, Mao JI, Charnas LR, Jackson CE, Jaye M. Jackson-Weiss and Crouzon syndromes are allelic with mutations in fibroblast growth factor receptor 2. Nat Genet. 1994 Nov;8(3):275-9. Erratum in: Nat Genet 1995 Apr;9(4):451. Citation on PubMed Robin ...

  11. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    Full Text Available This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was: "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11.] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75.] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76.] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77.] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by

  12. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecast, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described

  13. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  14. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies are springing up everywhere, intending either to compete directly in the supercomputer arena or to provide entry-level systems from which to graduate to supercomputers. Even well-established organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that, from both the hardware and software points of view, they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  15. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  16. Effect of shallow donors on Curie–Weiss temperature of Co-doped ZnO

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shuxia, E-mail: gsx0391@sina.com [Department of Physics, Jiaozuo Teachers College, Jiaozuo 454001 (China); Key Laboratory for Special Functional Materials of Ministry of Education, Henan University, Kaifeng 475004 (China); Li, Jiwu [Department of Physics, Jiaozuo Teachers College, Jiaozuo 454001 (China); Du, Zuliang [Key Laboratory for Special Functional Materials of Ministry of Education, Henan University, Kaifeng 475004 (China)

    2014-12-15

    Co-doped ZnO and Al, Co co-doped ZnO polycrystalline powders were synthesized by the co-precipitation method. The magnetization curves measured at 2 K show neither hysteresis nor remanence for any sample. ZnO:Co grown at low temperature has a positive Curie–Weiss temperature Θ, and ZnO:Co grown at high temperature has a negative Θ; however, Al-doped ZnO:Co grown at high temperature has a positive Θ. A positive Curie–Weiss temperature Θ is considered to be related to the presence of shallow donors in the samples. - Highlights: • Co-doped ZnO and Al, Co co-doped ZnO polycrystalline powders were synthesized. • No hysteresis is observed for any sample. • The Curie–Weiss temperature Θ changes sign upon Al doping. • A positive Θ should be related to shallow donors.
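
    For context (textbook background rather than material from the paper), the Curie–Weiss temperature Θ quoted above is the intercept in the high-temperature Curie–Weiss law for the dc susceptibility; a positive Θ signals predominantly ferromagnetic exchange and a negative Θ antiferromagnetic exchange:

        \chi(T) = \frac{C}{T - \Theta}, \qquad C = \frac{N_{\mathrm{spin}}\,\mu_{\mathrm{eff}}^{2}}{3 k_{B}},

    so Θ is obtained in practice from a linear fit of 1/χ versus T at temperatures well above any ordering.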

  17. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  18. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  19. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (''teraflops'' or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  20. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at University of Illinois at Urbana-Champaign, IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  1. Dynamics of quantum measurements employing two Curie-Weiss apparatuses

    Science.gov (United States)

    Perarnau-Llobet, Martí; Nieuwenhuizen, Theodorus Maria

    2017-10-01

    Two types of quantum measurements, measuring the spins of an entangled pair and attempting to measure a spin at either of two positions, are analysed dynamically by apparatuses of the Curie-Weiss type. The outcomes comply with the standard postulates. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  2. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  3. Flatfoot in Müller-Weiss syndrome: a case series

    Directory of Open Access Journals (Sweden)

    Wang Xu

    2012-08-01

    Full Text Available Introduction: Spontaneous osteonecrosis of the navicular bone in adults is a rare entity, known as Müller-Weiss syndrome. We report here on our experience with six patients with Müller-Weiss syndrome accompanied by flatfoot deformity, but on a literature search found no reports on this phenomenon. Because the natural history and treatment are controversial, an understanding of how to manage this deformity may be helpful for surgeons when choosing the most appropriate operative procedure. Case presentation: Six patients (five women, one man; average age, 54 years) with flatfoot caused by osteonecrosis of the navicular bone were followed up between January 2005 and December 2008 (mean follow-up period, 23.2 months). Conservative treatment, such as physical therapy and non-steroidal anti-inflammatory drugs, was used, but failed. Physical examinations revealed flattening of the medial arch of the involved foot and mild tenderness at the mid-tarsal joint. Weight-bearing X-rays (anterior-posterior and lateral views), computed tomography, and MRI scans were performed for each case. Talonavicular joint arthrodesis was performed in cases of single talonavicular joint arthritis. Triple arthrodesis was performed in cases of triple joint arthritis to reconstruct the medial arch. Clinical outcomes were assessed using the American Orthopaedic Foot and Ankle Society ankle-hindfoot scale; the scores were 63.0 pre-operatively and 89.8 post-operatively. All patients developed bony fusion. Conclusions: The reason for the development of flatfoot in patients with Müller-Weiss syndrome is unknown. Surgical treatment may achieve favorable outcomes in terms of deformity correction, pain relief, and functional restoration. The choice of operative procedure may differ in patients with both flatfoot and posterior tibial tendon dysfunction.

  4. Misjudging frustrations in spin liquids from oversimplified use of Curie-Weiss law

    Energy Technology Data Exchange (ETDEWEB)

    Nag, Abhishek, E-mail: msan@iacs.res.in [Department of Materials Science, Indian Association for the Cultivation of Science, Jadavpur, Kolkata 700032 (India); Ray, Sugata [Department of Materials Science, Indian Association for the Cultivation of Science, Jadavpur, Kolkata 700032 (India); Centre for Advanced Materials, Indian Association for the Cultivation of Science, Jadavpur, Kolkata 700032 (India)

    2017-02-15

    Absence of a single smoking-gun experiment to identify a quantum spin liquid has kept their characterisation difficult to date. Featureless dc magnetic susceptibility and large antiferromagnetic frustration are always considered the essential pointers to these systems. However, we show that the amount of frustration estimated by applying a generalised Curie-Weiss law to these susceptibility data is prone to errors and should therefore be treated with caution. We measure and analyse susceptibility data of Ba{sub 3}ZnIr{sub 2}O{sub 9}, a spin-orbital liquid candidate, and Gd{sub 2}O{sub 3}, a 1.5 K antiferromagnet, and show the distinguishing features between them. A continuous and significant change in the Curie and Weiss constants is seen to take place in Ba{sub 3}ZnIr{sub 2}O{sub 9} and other reported spin liquids as the range of fitting temperatures is changed, showing the need for a temperature ‘range-of-fit’ analysis before commenting on the Weiss constants of spin liquids. The variation observed is similar to fluctuations among topological sectors persisting over a range of temperature in spin-ice candidates. On the other hand, even though we find correlations to exist at even 100 times the ordering temperature in Gd{sub 2}O{sub 3}, no such fluctuation is observed, which may be used as an additional distinguishing signature of spin liquids over similarly featureless correlated paramagnets. - Highlights: • Curie-Weiss fitting may give erroneous frustration parameters in spin liquids. • The results depend upon the choice of fitting method and temperature range used. • A more appropriate method is to use a ‘range-of-fit’ analysis. • This can distinguish between spin liquids and correlated paramagnets.
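
    A hypothetical illustration of the ‘range-of-fit’ analysis advocated above (not the authors' code; the synthetic data, array names and fitting windows are placeholders): fit the inverse susceptibility to the Curie-Weiss law over several temperature windows and watch how the extracted Curie constant C and Weiss temperature Θ drift.

        import numpy as np

        # Synthetic susceptibility with Theta = -30 K plus a small temperature-independent term,
        # standing in for a measured chi(T) of a frustrated magnet
        T = np.linspace(2.0, 300.0, 300)
        chi = 0.5 / (T + 30.0) + 1.0e-4

        def curie_weiss_fit(T, chi, t_min, t_max=300.0):
            """Linear fit of 1/chi = (T - Theta)/C on [t_min, t_max]; returns (C, Theta)."""
            mask = (T >= t_min) & (T <= t_max)
            slope, intercept = np.polyfit(T[mask], 1.0 / chi[mask], 1)
            C = 1.0 / slope
            return C, -intercept * C

        # 'Range-of-fit' analysis: vary the lower bound of the fitting window
        for t_min in (50, 100, 150, 200, 250):
            C, theta = curie_weiss_fit(T, chi, t_min)
            print(f"window [{t_min:3d}, 300] K:  C = {C:6.3f}  Theta = {theta:7.2f} K")

    Because the synthetic data are not a pure Curie-Weiss form, the fitted Θ shifts with the window, which is exactly the kind of drift the paper warns about when a frustration parameter is quoted from a single fit.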

  5. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  6. Weiss oscillations in the electronic structure of modulated graphene

    International Nuclear Information System (INIS)

    Tahir, M; Sabeeh, K; MacKinnon, A

    2007-01-01

    We present a theoretical study of the electronic structure of modulated graphene in the presence of a perpendicular magnetic field. The density of states and the bandwidth for the Dirac electrons in this system are determined. The appearance of unusual Weiss oscillations in the bandwidth and density of states is the main focus of this work

  7. On the equivalence of dilute antiferromagnets and ferromagnets in random external fields: Curie-Weiss models

    International Nuclear Information System (INIS)

    Perez, J.F.; Pontin, L.F.; Segundo, J.A.B.

    1985-01-01

    Using a method proposed by van Hemmen, the free energy of the Curie-Weiss version of the site-dilute antiferromagnetic Ising model is computed in the presence of a uniform magnetic field. The solution displays an exact correspondence between this model and the Curie-Weiss version of the Ising model in the presence of a random magnetic field. The phase diagrams are discussed and a tricritical point is shown to exist. (Author)
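
    As a schematic of the objects involved (standard mean-field background; the paper's van Hemmen-style derivation is not reproduced here), the Curie-Weiss Ising model in a uniform field H and a random field h with distribution average E_h has the free-energy functional and self-consistency condition

        f(m) = \frac{J}{2}m^{2} - \frac{1}{\beta}\,\mathbb{E}_{h}\!\left[\ln 2\cosh\beta\,(J m + H + h)\right], \qquad m = \mathbb{E}_{h}\!\left[\tanh\beta\,(J m + H + h)\right],

    with the thermodynamic free energy given by the minimum of f over m; the exact correspondence stated in the abstract is a statement about such computed free energies.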

  8. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPS (10/20/04). It was conceived, designed, built, and deployed in just 120 days. Columbia is a 20-node supercomputer built on proven 512-processor nodes and is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors. It provides the largest node size incorporating commodity parts (512) and the largest shared-memory environment (2048), and with 88% efficiency it tops the scalar systems on the Top500 list.

  9. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  10. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  11. Mortality in high-risk patients with bleeding Mallory-Weiss syndrome is similar to that of peptic ulcer bleeding. Results of a prospective database study.

    Science.gov (United States)

    Ljubičić, Neven; Budimir, Ivan; Pavić, Tajana; Bišćanin, Alen; Puljiz, Zeljko; Bratanić, Andre; Troskot, Branko; Zekanović, Dražen

    2014-04-01

    The aim of this study was to identify the predictive factors influencing mortality in patients with bleeding Mallory-Weiss syndrome in comparison with peptic ulcer bleeding. Between January 2005 and December 2009, 281 patients with endoscopically confirmed Mallory-Weiss syndrome and 1530 patients with peptic ulcer bleeding were consecutively evaluated. The 30-day mortality and clinical outcome were related to the patients' demographic data, endoscopic, and clinical characteristics. The one-year cumulative incidence for bleeding Mallory-Weiss syndrome was 7.3 cases/100,000 people and for peptic ulcer bleeding 40.4 cases/100,000 people. The age-standardized incidence for both bleeding Mallory-Weiss syndrome and peptic ulcer bleeding remained unchanged during the observational five-year period. The majority of patients with bleeding Mallory-Weiss syndrome were male patients with significant overall comorbidities (ASA class 3-4). Overall 30-day mortality rate was 5.3% for patients with bleeding Mallory-Weiss syndrome and 4.6% for patients with peptic ulcer bleeding (p = 0.578). In both patients with bleeding Mallory-Weiss syndrome and peptic ulcer bleeding, mortality was significantly higher in patients over 65 years of age and those with significant overall comorbidities (ASA class 3-4). The incidence of bleeding Mallory-Weiss syndrome and peptic ulcer bleeding has not changed over a five-year observational period. The overall 30-day mortality was almost equal for both bleeding Mallory-Weiss syndrome and peptic ulcer bleeding and was positively correlated to older age and underlying comorbid illnesses.

  12. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  13. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  14. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers, represented by the CRAY-1, has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on vector computation. The authors have investigated the adaptability of about 40 typical atomic energy codes to vector computation over the past six years. Based on the results of this investigation, the adaptability of the vector computation capability of supercomputers to atomic energy codes, the problems regarding their utilization, and the future prospects are explained. The adaptability of individual calculation codes to vector computation largely depends on the algorithm and program structure used in the codes. The speed-up achieved by pipeline vector systems, the investigation in the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speed-up factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)

  15. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  16. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  17. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  18. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No.2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  19. The Weiss molecular field and the local molecular field; Le champ moleculaire de Weiss et le champ moleculaire local

    Energy Technology Data Exchange (ETDEWEB)

    Neel, L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Faculte des Sciences de Grenoble, 38 (France)

    1959-07-01

    Initially, the present report outlines the work done by P. Weiss on the molecular field and the theory of spontaneous magnetization. It then stresses the success of the theory in the interpretation of the magnetic and energetic properties of ferromagnetic bodies, and indicates recent progress: the energetic molecular field, and the corrective molecular field of the equation of state. In the second part, the author reviews the difficulties encountered by the theory, and shows how they were overcome by the introduction of the notion of the 'local molecular field', thus supplying the key to the properties of antiferro- and ferrimagnetic bodies. The present state of progress in the interpretation of the magnetic properties of pyrrhotite, which played a major part in the discovery of the molecular field, is also discussed in paragraph 4 and the appendices. (author)

  20. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to survey theoretical activities in computational astrophysics in Japan. It was also aimed at promoting effective collaboration between numerical experimentalists working on supercomputing techniques. The presented papers covered a stimulating range of subjects: hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity. In fact, these numerical calculations have become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  1. Mario Weiss (1927 – 2008)

    CERN Multimedia

    2008-01-01

    It was with great sadness that we learned that our colleague and friend Mario Weiss passed away on February 11th. A feeling of emptiness overtook us all. Mario was a reassuring reference in the small community of linear accelerator experts, as he continued to come to CERN regularly and discuss accelerator problems with passion for many years after his official retirement. Mario came to CERN in 1960 and in the PS Division worked on beam dynamics of low-energy high-intensity proton beams, soon becoming a world-level expert in the field. He took an active part in the construction of Linac2, where he was responsible for the low-energy beam transport system. In the early 80’s he turned his interest to the Radio Frequency Quadrupole (RFQ), a novel concept for acceleration which allows the problems related to bunching and injecting low-energy beams to be overcome. After starting a fruitful collaboration with the Los Alamos scientists, ...

  2. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to the significant idle time of computational resources, and, in turn, to the decrease in speed of scientific research. This paper presents three approaches to study the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which allows to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are being detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.
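
    A toy sketch of the third approach described above (abnormal-job detection), with hypothetical metric names, job data and threshold; the actual analysis runs on real monitoring streams and its code is not published in the abstract:

        import numpy as np

        # Hypothetical per-job averages collected by system monitoring
        jobs = {
            "job-101": {"cpu_load": 0.92, "net_bytes": 4.1e8},
            "job-102": {"cpu_load": 0.88, "net_bytes": 3.9e8},
            "job-103": {"cpu_load": 0.05, "net_bytes": 1.2e3},   # suspiciously idle
            "job-104": {"cpu_load": 0.90, "net_bytes": 4.3e8},
        }

        def abnormal_jobs(jobs, metric, z_threshold=1.5):
            """Flag jobs whose metric deviates from the job-flow mean by more than z_threshold sigmas."""
            values = np.array([j[metric] for j in jobs.values()])
            mean, std = values.mean(), values.std()
            return [name for name, j in jobs.items()
                    if std > 0 and abs(j[metric] - mean) / std > z_threshold]

        print(abnormal_jobs(jobs, "cpu_load"))   # -> ['job-103']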

  3. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using the metric. It is demonstrated that in addition to the computing performance and power consumption, the new supercomputer i...
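
    For reference, the Amdahl's-law bound invoked above: if a fraction p of the work parallelizes perfectly over N processors and the remaining (1 - p) stays sequential, the speedup is

        S(N) = \frac{1}{(1-p) + p/N}, \qquad S(\infty) = \frac{1}{1-p},

    so at the processor counts of machines like TaihuLight even a very small effective sequential fraction caps the achievable speedup, which is the kind of limitation the figure of merit discussed above is meant to expose.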

  4. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  5. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
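
    A minimal sketch of the light-weight MPI wrapper idea mentioned above, fanning single-threaded payloads out over the cores of a large worker node (the payload executable and file names are hypothetical; this is not the PanDA pilot code):

        # Each MPI rank runs one single-threaded payload on its own input file.
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        cmd = ["./generate_events",
               f"--input=events_{rank:04d}.in",
               f"--output=events_{rank:04d}.out"]
        result = subprocess.run(cmd, capture_output=True, text=True)

        # Collect return codes on rank 0 so the batch job can report a single status
        codes = comm.gather(result.returncode, root=0)
        if rank == 0:
            print(f"{sum(c == 0 for c in codes)}/{len(codes)} payloads succeeded")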

  6. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  7. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  8. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  9. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  10. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  11. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems ETA 10 is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  12. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
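
    A simplified illustration of representing discovered components and links as a graph, which is how the model describes the communication network (hypothetical device names; not the Octotron code itself):

        import networkx as nx

        # Hypothetical discovery results: (device, kind) pairs and the Ethernet links between devices
        devices = [("switch-01", "switch"), ("node-001", "node"), ("node-002", "node")]
        links = [("switch-01", "node-001"), ("switch-01", "node-002")]

        topology = nx.Graph()
        for name, kind in devices:
            topology.add_node(name, kind=kind)    # every component becomes a vertex with its type
        topology.add_edges_from(links)            # every discovered link becomes an edge

        # Monitoring rules can then be attached per edge, e.g. "this link must stay up"
        for u, v in topology.edges:
            print(f"watch link {u} <-> {v}")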

  13. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  14. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  15. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  16. Pointwise Multipliers on Spaces of Homogeneous Type in the Sense of Coifman and Weiss

    Directory of Open Access Journals (Sweden)

    Yanchang Han

    2014-01-01

    homogeneous type in the sense of Coifman and Weiss, pointwise multipliers of inhomogeneous Besov and Triebel-Lizorkin spaces are obtained. We make no additional assumptions on the quasi-metric or the doubling measure. Hence, the results of this paper extend earlier related results to a more general setting.

  17. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  18. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  19. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  20. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer with 50 million operations per second is suggested. Its realization allows one to solve JINR data analysis problems for large spectrometers (in particular the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS. The processors are combined by means of standard VME buses. A MicroVAX II host computer organizes the operation of the system, and data input and output are realized via the MicroVAX II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, so all JINR users have access to the suggested system.

  1. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  2. Fractal fluctuations and quantum-like chaos in the brain by analysis of variability of brain waves: A new method based on a fractal variance function and random matrix theory: A link with El Naschie fractal Cantorian space-time and V. Weiss and H. Weiss golden ratio in brain

    International Nuclear Information System (INIS)

    Conte, Elio; Khrennikov, Andrei; Federici, Antonio; Zbilut, Joseph P.

    2009-01-01

    We develop a new method for analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such formulation with H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie fractal Cantorian space-time theory.
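
    The record does not spell out the Fractal Variance Function itself; a sketch of the ingredient it is said to rest on, an empirical variogram of a sampled signal, is given below in Python. The synthetic signal, lag range and scaling-exponent fit are illustrative assumptions, not taken from the paper.

        import numpy as np

        def empirical_variogram(x, max_lag):
            # gamma(h) = 0.5 * mean( (x[t+h] - x[t])^2 ), estimated for lags 1..max_lag
            x = np.asarray(x, dtype=float)
            lags = np.arange(1, max_lag + 1)
            gamma = np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2) for h in lags])
            return lags, gamma

        rng = np.random.default_rng(0)
        # Synthetic stand-in for an EEG channel: correlated noise, not real data.
        signal = np.cumsum(rng.normal(size=2048)) + rng.normal(scale=0.5, size=2048)

        lags, gamma = empirical_variogram(signal, max_lag=64)
        # The slope of log(gamma) versus log(lag) acts as a roughness/fractal exponent
        # of the fluctuations, the kind of quantity such analyses track.
        slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
        print(f"variogram scaling exponent ~ {slope:.2f}")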

  3. Spin-flip dynamics of the Curie-Weiss model: Loss of Gibbsianness with possibly broken symmetry.

    CERN Document Server

    Külske, C

    2005-01-01

    We study the conditional probabilities of the Curie-Weiss Ising model in vanishing external field under a symmetric independent stochastic spin-flip dynamics and discuss their set of bad configurations (points of discontinuity). We exhibit a complete analysis of the transition between Gibbsian and non-Gibbsian behavior as a function of time, extending the results for the corresponding lattice model, where only partial answers can be obtained. For initial inverse temperature $\beta \leq 1$, we prove that the time-evolved measure is always Gibbsian. For $1 < \beta \leq \frac{3}{2}$, the time-evolved measure loses its Gibbsian character at a sharp transition time. For $\beta > \frac{3}{2}$, we observe the new phenomenon of symmetry-breaking of bad configurations: the time-evolved measure loses its Gibbsian character at a sharp transition time, and bad configurations with non-zero spin-average appear. These bad configurations merge into a neutral configuration at a later transition time, while the measure stays non-Gibbs. In our proof we give a detailed analysis of the phase-diagram of a Curie-Weiss random field Ising model with possi...
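
    For orientation, the static Curie-Weiss Ising model referred to here is the mean-field model on $N$ spins $\sigma_i \in \{-1,+1\}$; a standard way to write its Gibbs measure at inverse temperature $\beta$ and external field $h$ (here $h = 0$) is

        \mu_{N,\beta,h}(\sigma) \;=\; \frac{1}{Z_{N,\beta,h}}
            \exp\!\Big( \frac{\beta}{2N} \Big( \sum_{i=1}^{N} \sigma_i \Big)^{2}
                        + \beta h \sum_{i=1}^{N} \sigma_i \Big),
            \qquad \sigma \in \{-1,+1\}^{N},

    so that each spin interacts equally and weakly with every other spin; the dynamics studied in the record applies independent stochastic spin flips to a sample drawn from this measure.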

  4. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is reported. The fields of atomic energy research that make heavy use of supercomputers and the contents of their computations are outlined. Vectorization is briefly explained, and the discussion covers nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the parallelism inherent in atomic energy computations of fluids and other problems, algorithms suited to vector processing, and the speed-up achieved by vectorization. At present the Japan Atomic Energy Research Institute operates two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation have shifted from criticality calculations around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new reactor types and reactor safety assessment at present. The way computers are used has likewise advanced from batch processing to time-sharing, from one-dimensional to three-dimensional computation, from steady linear to unsteady nonlinear computation, and from experimental analysis to numerical simulation. (K.I.)

  5. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  6. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  7. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real time or faster on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  8. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
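
    The Swift sub-jobs and main-wrap machinery is specific to Swift and the Cobalt scheduler and is not reproduced here; the many-task pattern it targets, packing many small independent runs into one large allocation, can be sketched generically in Python. The task list and worker count below are placeholders, and on a real Blue Gene/Q allocation the launches would go through the resource manager's sub-block mechanism rather than plain subprocesses.

        import subprocess, sys
        from concurrent.futures import ThreadPoolExecutor

        # Placeholder task list: each entry stands for one small, independent run
        # (here just a trivial Python invocation so the sketch runs anywhere).
        tasks = [[sys.executable, "-c", f"print('task {i} done')"] for i in range(64)]

        def run(cmd):
            # Launch one small job; on a real machine each launch would be pinned to
            # a subset of the nodes/cores owned by the enclosing batch allocation.
            return subprocess.run(cmd, capture_output=True).returncode

        # Keep a fixed number of tasks in flight, analogous to filling sub-blocks
        # of a single large resource block.
        with ThreadPoolExecutor(max_workers=8) as pool:
            codes = list(pool.map(run, tasks))

        print(sum(c == 0 for c in codes), "of", len(tasks), "tasks succeeded")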

  9. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
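
    The quoted throughput can be sanity-checked with a short calculation; the total number of Kepler target stars assumed below (about 200,000) is an illustrative assumption, not a value stated in the record.

        injections_per_core_hour = 16          # quoted generation rate per Pleiades core
        injections_per_star = 2000             # required for the "shallow" experiment
        core_hours_per_star = injections_per_star / injections_per_core_hour   # 125 core-hours

        kepler_targets = 200_000               # assumed total number of Kepler target stars
        stars_covered = 0.16 * kepler_targets  # 16% of targets -> 32,000 stars
        wall_clock_hours = 200
        cores_needed = stars_covered * core_hours_per_star / wall_clock_hours  # about 20,000 cores

        print(core_hours_per_star, int(stars_covered), int(cores_needed))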

  10. DOE Zero Energy Ready Home Case Study: Weiss Building & Development, Downers Grove, Illinois

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2013-09-01

    This single-family home built in a peat bog has underground storage tanks and drainage tanks, blown fiberglass insulation, coated rigid polyisocyanurate, and flashing. The 3,600-square-foot custom home built by Weiss Building & Development LLC is the first home in Illinois certified to the DOE Challenge Home criteria, which requires that homes meet the EPA Indoor airPlus guidelines. The builder won a 2013 Housing Innovation Award in the custom builder category.

  11. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
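
    The clustering algorithm itself is not given in the abstract; the first step such methods typically rely on, masking variable fields so that messages with the same syntactic structure collapse into one template, can be sketched as follows. The regular expressions and sample messages are invented for illustration.

        import re
        from collections import defaultdict

        def template(msg):
            # Mask fields that vary between otherwise identical messages:
            # hex addresses, node names of the form nidNNNN / rXcY, and numbers.
            msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
            msg = re.sub(r"\b(nid\d+|r\d+c\d+)\b", "<NODE>", msg)
            msg = re.sub(r"\d+", "<NUM>", msg)
            return msg

        logs = [
            "nid00213 machine check at 0x7f3a12 code 41",
            "nid01877 machine check at 0x1b9f00 code 17",
            "link error on r3c7 retry 2",
            "link error on r0c1 retry 9",
        ]

        groups = defaultdict(list)
        for line in logs:
            groups[template(line)].append(line)   # messages sharing a template cluster together

        for tpl, members in groups.items():
            print(len(members), tpl)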

  12. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur

  13. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes with particular emphasis on the financial sector. A reference was made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods combined with modern technology enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  14. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  15. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reducing visualization processing time and making efficient use of the JAEA network have become necessary. As a solution, we introduced a remote visualization system which is able to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)

  16. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  17. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  18. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  19. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
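
    The linearized step described above, relating model adjustments to travel-time residuals through a tomographic matrix and solving a regularized system, can be illustrated with a small damped least-squares solve in Python. The matrix sizes, noise level and damping parameter are arbitrary placeholders, not values from the project.

        import numpy as np

        rng = np.random.default_rng(1)
        n_rays, n_cells = 200, 50
        G = rng.random((n_rays, n_cells))          # tomographic matrix: ray sensitivity to each cell
        true_dm = rng.normal(size=n_cells)         # "true" slowness adjustment for this toy example
        residuals = G @ true_dm + rng.normal(scale=0.01, size=n_rays)  # travel-time residuals

        # Tikhonov (damped) least squares: minimize ||G dm - r||^2 + lam^2 ||dm||^2,
        # solved via the augmented system [G; lam*I] dm = [r; 0].
        lam = 0.1
        A = np.vstack([G, lam * np.eye(n_cells)])
        b = np.concatenate([residuals, np.zeros(n_cells)])
        dm, *_ = np.linalg.lstsq(A, b, rcond=None)

        print("relative model error:", np.linalg.norm(dm - true_dm) / np.linalg.norm(true_dm))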

  20. Study of Y and Lu iron garnets using Bethe-Peierls-Weiss method

    Science.gov (United States)

    Goveas, Neena; Mukhopadhyay, G.; Mukhopadhyay, P.

    1994-11-01

    We study here the magnetic properties of Y and Lu iron garnets using the Bethe-Peierls-Weiss method modified to suit complex systems like these garnets. We consider these garnets as described by a Heisenberg Hamiltonian with two sublattices (a,d) and determine the exchange interaction parameters Jad, Jaa and Jdd by matching the experimental susceptibility curves. We find Jaa and Jdd to be much smaller than those determined by Néel theory, and consistent with those obtained from the study of spin wave spectra; the spin wave dispersion constant obtained using these parameters gives good agreement with the experimental values.
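
    Written schematically, the two-sublattice Heisenberg Hamiltonian behind such a fit takes the form (sign and pair-counting conventions vary between papers):

        \mathcal{H} \;=\; -2 J_{ad} \sum_{\langle i \in a,\, j \in d \rangle} \mathbf{S}_i \cdot \mathbf{S}_j
                          \;-\; 2 J_{aa} \sum_{\langle i,j \rangle \in a} \mathbf{S}_i \cdot \mathbf{S}_j
                          \;-\; 2 J_{dd} \sum_{\langle i,j \rangle \in d} \mathbf{S}_i \cdot \mathbf{S}_j ,

    where the sums run over nearest-neighbour pairs between and within the a and d sublattices, and the three exchange constants are the quantities fitted to the susceptibility data.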

  1. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. ''Power users'' were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  2. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  3. Vanda Shrenger Weiss - the Croatian pioneer between two worlds: Her role in the birth of the Italian Psychoanalytic Society (SPI).

    Science.gov (United States)

    Corsa, Rita

    2017-08-01

    In this paper the author sheds light on Vanda Shrenger Weiss, a forgotten pioneer of the international psychoanalytic movement. Vanda Shrenger was born into a large Jewish family in Croatia (1892), and her life was thoroughly intertwined with the great tragedies of European history: the First World War and the anti-Semitic persecution within Eastern Europe, which entailed the decimation of her extended family in Croatia. Finally, the introduction of fascist laws in Italy led to her and her husband - Edoardo Weiss, the founder of the Italian Psychoanalytic Society - seeking refuge in the United States of America. During her time spent in Italy (1919-39), Vanda Shrenger, doctor and paediatrician, dedicated herself to psychoanalysis. She played a crucial part in the reconstruction of the Italian Psychoanalytic Society (SPI), whilst also being a founding member of the Rivista Italiana di Psicoanalisi (Rome, 1932). Vanda was the first woman to be a member of the SPI as well as to present a paper for it. This insightful and extensive analysis relating to this pioneer of the psychoanalytic world has been meticulously accomplished by use of a combination of original archival materials, along with access to previously unpublished documents and personal details, kindly made available to the author by Marianna, the daughter of Vanda and Edoardo Weiss, who still lives in the United States today. Copyright © 2016 Institute of Psychoanalysis.

  4. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  5. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several of the state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we would need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool–a visual analytics system–that uses the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can not only help improve the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  6. Report: Weiss, Sugar, Dvorak & Dusek, Ltd., Single Audit of the Delta Institute and Affiliates for Year Ended June 30, 2003

    Science.gov (United States)

    Report #2006-S-00004, August 17, 2006. Weiss, Sugar, Dvorak & Dusek, Ltd., did not have sufficient quality control review procedures in place to ensure that all audit work performed was adequately supported by documentary evidence.

  7. KfK-seminar series on supercomputing und visualization from May till September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period from May 1992 to September 1992 a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de

  8. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  9. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created the basis for the development of a new research area, Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing complex socio-economic systems. Extensive application and development of such models, together with system modeling on supercomputers, will, in our view, bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the simulation of multi-agent social systems and, no less important, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all with regard to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the growth in computing power, it has become possible to describe the behavior of many separate elements of a complex system, such as a socio-economic system. The article also reviews the experience of foreign scientists and practitioners in running AFMs on supercomputers, analyzes an AFM developed at CEMI RAS, and describes the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation for forecasting the population of St. Petersburg according to three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the article.
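
    The CEMI RAS model itself is not described here; the general shape of an agent-focused demographic simulation of the kind mentioned, many simple agents updated year by year, which is what maps naturally onto supercomputer parallelism, might look like the following sketch. The population size, birth rate and mortality law are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        ages = rng.integers(0, 90, size=100_000)   # one age per agent; size is illustrative

        birth_rate = 0.011                         # invented annual births per living agent
        death_scale = 1e-4                         # invented mortality growth with age

        def step(ages):
            # Mortality rises linearly with age; survivors grow one year older.
            survive = rng.random(ages.size) > death_scale * ages
            ages = ages[survive] + 1
            births = rng.poisson(birth_rate * ages.size)
            return np.concatenate([ages, np.zeros(births, dtype=ages.dtype)])

        for _ in range(10):                        # ten simulated years
            ages = step(ages)
        print("agents after 10 years:", ages.size)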

  10. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  11. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  12. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation
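
    A minimal sketch of the light-weight MPI wrapper idea, each MPI rank launching one single-threaded payload so that many independent payloads fill a multi-core allocation, could look like the following with mpi4py. The payload commands are placeholders, not the actual PanDA pilot or ATLAS workloads.

        import subprocess, sys
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Placeholder list of single-threaded payloads; rank i takes every size-th item.
        payloads = [[sys.executable, "-c", f"print('event batch {i}')"] for i in range(1024)]
        my_work = payloads[rank::size]

        failures = sum(subprocess.run(cmd, capture_output=True).returncode != 0 for cmd in my_work)

        # Gather per-rank failure counts on rank 0 for a simple summary.
        totals = comm.gather(failures, root=0)
        if rank == 0:
            print("failed payloads:", sum(totals), "of", len(payloads))

    Run under the batch system's MPI launcher with one rank per core, this turns a single batch slot into many concurrent single-core payloads.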

  13. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massive parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  14. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
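
    The abstract does not reproduce the model itself; a generic decomposition of the kind such frameworks use, splitting the predicted runtime into compute, memory-contention and communication terms, is

        T \;\approx\; T_{\text{comp}} + T_{\text{mem}} + T_{\text{comm}},
        \qquad
        T_{\text{mem}} \;\approx\; \frac{c \, V_{\text{core}}}{BW_{\text{sustained}}},

    where $c$ is the number of cores contending for a shared memory subsystem, $V_{\text{core}}$ the data volume each core streams, and $BW_{\text{sustained}}$ the measured (e.g. STREAM) bandwidth of the node, with $T_{\text{comm}}$ parameterized from MPI benchmarks. This generic form is only meant to orient the reader; it is not the formula used in the paper.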

  15. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  16. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  17. Spontaneous and bilateral avascular necrosis of the navicula: Müller-Weiss disease.

    Science.gov (United States)

    Aktaş, Erdem; Ayanoğlu, Tacettin; Hatipoğlu, Yasin; Kanatlı, Ulunay

    2016-12-01

    Although trauma, foot deformity (pes planovalgus), systemic diseases such as diabetes mellitus and lupus, drugs (steroids, antineoplastics) and excessive alcohol consumption have all been implicated in the etiology of avascular necrosis of the tarsal bones, spontaneous avascular necrosis of the navicular bone, especially in adults, is a rare entity. In this article, we report a 50-year-old female patient with bilateral, spontaneous avascular necrosis of the navicular bone and related severe talonavicular arthrosis. Clinical and radiological findings were concordant with Müller-Weiss disease, a rare and complex idiopathic condition of the adult tarsal navicular bone characterized by progressive navicular fragmentation and talonavicular joint destruction. The patient was successfully treated with two-stage bilateral talonavicular arthrodesis.

  18. A Conditional Curie-Weiss Model for Stylized Multi-group Binary Choice with Social Interaction

    Science.gov (United States)

    Opoku, Alex Akwasi; Edusei, Kwame Owusu; Ansah, Richard Kwame

    2018-04-01

    This paper proposes a conditional Curie-Weiss model as a model for decision making in a stylized society made up of binary decision makers that face a particular dichotomous choice between two options. Following Brock and Durlauf (Discrete choice with social interaction I: theory, 1995), we set up both socio-economic and statistical mechanical models for the choice problem. We point out when both the socio-economic and statistical mechanical models give rise to the same self-consistent equilibrium mean choice level(s). The phase diagram of the associated statistical mechanical model and its socio-economic implications are discussed.
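
    In its standard single-group form, the equilibrium mean choice level $m$ of a Curie-Weiss model with interaction strength $J$, external field $h$ and inverse temperature $\beta$ solves the self-consistency equation

        m \;=\; \tanh\big( \beta \, (J m + h) \big),

    which admits multiple solutions, and hence multiple candidate equilibrium choice levels, once $\beta J > 1$; how the conditional multi-group setting modifies this picture is the subject of the paper.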

  19. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  20. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer

  1. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods with parameter spaces explored using many simulations of around 128^3 grid points, this data being used to inform the world's largest three-dimensional time-dependent simulation with 1024^3 grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  2. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA on supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  3. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA on supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  4. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that casts gloom over the future of supercomputing. A system that comprises many components has a high chance of failing, and failing often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  5. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  6. Supercomputer and cluster performance modeling and analysis efforts: 2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  7. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  8. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer, hosted at the Instituto de Fisica de Cantabria (IFCA), entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  9. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  10. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms
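    The following sketch illustrates, under simplifying assumptions, why distributing plane-wave coefficients across processors works well: the kinetic-energy operator is diagonal in the plane-wave basis, so it applies locally to each processor's block of coefficients, and only a global reduction is needed for the total. The array sizes and random data are placeholders, and mpi4py stands in here for the original hypercube message-passing layer.

        # Hypothetical sketch: block-distributed plane-wave coefficients, local kinetic energy, global sum.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_pw, n_bands = 20000, 256                      # illustrative sizes
        lo, hi = rank * n_pw // size, (rank + 1) * n_pw // size

        rng = np.random.default_rng(rank)
        coeff = rng.standard_normal((hi - lo, n_bands)) + 1j * rng.standard_normal((hi - lo, n_bands))
        g2 = rng.random(hi - lo)                        # placeholder for |G+k|^2 of the local plane waves

        local_ekin = 0.5 * np.sum(g2[:, None] * np.abs(coeff) ** 2)   # sum over locally stored plane waves
        total_ekin = comm.allreduce(local_ekin, op=MPI.SUM)           # sum over all processors
        if rank == 0:
            print("total kinetic energy (arbitrary units):", total_ekin)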

  11. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  12. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  13. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
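    One common technique for visualizing complex wave functions, of the kind referred to above, maps the phase to hue and the magnitude to brightness. The sketch below (an illustration using a Gaussian wave packet, not the CM-2 code) shows the idea with NumPy and Matplotlib.

        # Hypothetical sketch: phase-to-hue, magnitude-to-brightness rendering of a complex field.
        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.colors import hsv_to_rgb

        x, y = np.meshgrid(np.linspace(-5, 5, 400), np.linspace(-5, 5, 400))
        psi = np.exp(1j * 3 * x) * np.exp(-(x**2 + y**2) / 4)      # illustrative Gaussian wave packet

        hue = (np.angle(psi) + np.pi) / (2 * np.pi)                 # phase -> hue in [0, 1]
        val = np.abs(psi) / np.abs(psi).max()                        # magnitude -> brightness
        rgb = hsv_to_rgb(np.stack([hue, np.ones_like(hue), val], axis=-1))

        plt.imshow(rgb, origin="lower", extent=(-5, 5, -5, 5))
        plt.xlabel("x"); plt.ylabel("y")
        plt.show()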

  14. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  15. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  16. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems

  17. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
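    As a toy illustration of the sorted k-mer lists used as a core data structure above (not the progressiveMauve or BG/P implementation), the following sketch builds such a list for a single short sequence.

        # Hypothetical sketch: sorted k-mer list for one sequence.
        def sorted_kmer_list(seq, k):
            """Return (kmer, position) pairs sorted lexicographically by k-mer."""
            kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
            return sorted(kmers)

        genome = "ACGTACGTGACCTGA"          # toy sequence
        for kmer, pos in sorted_kmer_list(genome, k=4)[:5]:
            print(kmer, pos)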

  18. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  19. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  20. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  1. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software was designed for the automatic investigation of problems of computational mathematics with approximate data of different structures. Applied software was implemented to support mathematical modeling problems in construction, welding, and filtration processes.

  2. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  3. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
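    The in-situ coupling idea can be sketched generically as follows: the solver hands each timestep's field to an analysis callback at runtime, so only small summaries, rather than full state dumps, reach the filesystem. The solver update and the summary statistics below are placeholders, not the ASC milestone software.

        # Hypothetical sketch: simulation coupled with a lightweight analysis callback at runtime.
        import numpy as np

        def analyze(step, field):
            # in-situ analysis: keep only a few summary statistics instead of the full field
            return {"step": step, "min": float(field.min()), "max": float(field.max())}

        def run_simulation(n_steps=100, analyze_every=10):
            field = np.random.rand(64, 64, 64)
            summaries = []
            for step in range(n_steps):
                field = 0.5 * (field + np.roll(field, 1, axis=0))   # stand-in for a solver update
                if step % analyze_every == 0:
                    summaries.append(analyze(step, field))          # coupled analysis, no full dump
            return summaries

        print(run_simulation()[:3])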

  4. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that also improves the soft error rate, and supports DMA functionality allowing for parallel message passing.

  5. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  6. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  7. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, now has calculation capability superior to a PC's CPU. This GPU calculation performance is a match for a supercomputer of 2000. Although the GPU has such great calculation potential, it is not easy to program a simulation code for it, because of the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.
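    As an illustration of why Monte Carlo simulation maps well onto a GPU, the sketch below shows the embarrassingly parallel pattern in plain NumPy on the CPU: every sample is independent, so on a GPU each of thousands of threads could evaluate its own samples. This is a generic example, not the benchmark code described in the paper.

        # Hypothetical sketch: Monte Carlo estimate of pi; every sample is independent of the others.
        import numpy as np

        def estimate_pi(n_samples, seed=0):
            rng = np.random.default_rng(seed)
            x, y = rng.random(n_samples), rng.random(n_samples)
            inside = (x * x + y * y) <= 1.0           # fraction of points falling inside the quarter circle
            return 4.0 * inside.mean()

        print(estimate_pi(10_000_000))                # converges toward 3.1415...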

  8. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, now has calculation capability superior to a PC's CPU. This GPU calculation performance is a match for a supercomputer of 2000. Although the GPU has such great calculation potential, it is not easy to program a simulation code for it, because of the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation

  9. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density function theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  10. Asymptotic densities from the modified Montroll-Weiss equation for coupled CTRWs

    Science.gov (United States)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2018-01-01

    We examine the bi-scaling behavior of Lévy walks with nonlinear coupling, where χ, the particle displacement during each step, is coupled to the duration of the step, τ, by χ ~ τ^β. An example of such a process is regular Lévy walks, where β = 1. In recent years such processes were shown to be highly useful for analysis of a class of Langevin dynamics, in particular a system of Sisyphus laser-cooled atoms in an optical lattice, where β = 3/2. We discuss the well-known decoupling approximation used to describe the central part of the particles' position distribution, and use the recently introduced infinite-covariant density approach to study the large fluctuations. Since the density of the step displacements is fat-tailed, the last travel event must be treated with care for the latter. This effect requires a modification of the Montroll-Weiss equation, an equation which has proved important for the analysis of many microscopic models. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
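    For reference, the classical Montroll-Weiss equation for a decoupled CTRW, the starting point that the work above modifies for coupled walks and the last travel event, can be written in Fourier-Laplace space as follows (notation assumed for this note: psi-hat is the Laplace transform of the waiting-time density, lambda-hat the Fourier transform of the jump-length density).

        % decoupled Montroll-Weiss equation in Fourier-Laplace space (reference form, not the modified equation of the paper)
        \hat{P}(k,s) \;=\; \frac{1-\hat{\psi}(s)}{s}\,
                           \frac{1}{\,1-\hat{\lambda}(k)\,\hat{\psi}(s)\,}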

  11. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  12. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused...

  13. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA the new capability to collect, in real tim...

  14. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  15. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
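    The load-balancing strategy itself is not spelled out in the abstract; as a hedged illustration of the general idea only, the sketch below greedily assigns each document to the currently least-loaded worker, using text length as a proxy for processing cost.

        # Hypothetical sketch: greedy load balancing of documents across parallel workers.
        import heapq

        def balance(doc_lengths, n_workers):
            heap = [(0, w) for w in range(n_workers)]      # (current load, worker id)
            heapq.heapify(heap)
            assignment = {w: [] for w in range(n_workers)}
            # longest documents first, so the final loads end up close to each other
            for doc_id, length in sorted(enumerate(doc_lengths), key=lambda t: -t[1]):
                load, worker = heapq.heappop(heap)
                assignment[worker].append(doc_id)
                heapq.heappush(heap, (load + length, worker))
            return assignment

        print(balance([900, 120, 300, 750, 60, 480], n_workers=3))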

  16. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers with their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented. They reveal the details of the losses for a single operation. Then we analyze the global performance of a whole supercomputer by identifying reduction factors that bring down the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. Then the price-performance ratio for different architectures in a snapshot of January 1991 is briefly mentioned. Finally, some remarks on a user-friendly architecture for a supercomputer are made. (orig.)
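    The vector triad mentioned above is the kernel a(i) = b(i) + c(i)*d(i); because it performs only two floating-point operations per four memory accesses, its measured rate is typically limited by memory bandwidth rather than by peak arithmetic performance. A minimal NumPy timing sketch (illustrative only, not the original micro-measurement code):

        # Hypothetical sketch: timing the vector triad a = b + c*d.
        import time
        import numpy as np

        n = 10_000_000
        b, c, d = (np.random.rand(n) for _ in range(3))

        t0 = time.perf_counter()
        a = b + c * d                     # the triad: 2 flops and 4 memory accesses per element
        elapsed = time.perf_counter() - t0

        print(f"{2 * n / elapsed / 1e9:.2f} GFLOP/s effective")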

  17. The Weiss Functional Impairment Rating Scale-Parent Form for assessing ADHD: evaluating diagnostic accuracy and determining optimal thresholds using ROC analysis

    OpenAIRE

    Thompson, Trevor; Lloyd, Andrew; Joseph, Alain; Weiss, Margaret

    2017-01-01

    Purpose The Weiss Functional Impairment Rating Scale-Parent Form (WFIRS-P) is a 50-item scale that assesses functional impairment on six clinically relevant domains typically affected in attention-deficit/hyperactivity disorder (ADHD). As functional impairment is central to ADHD, the WFIRS-P offers potential as a tool for assessing functional impairment in ADHD. These analyses were designed to examine the overall performance of WFIRS-P in differentiating ADHD and non-ADHD cases using receiver...

  18. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  19. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  20. « Ni bas-bleu, ni pot-au-feu »: the conception of "woman" according to Augusta Moll-Weiss (France, turn of the 19th-20th centuries)

    Directory of Open Access Journals (Sweden)

    Sandrine Roll

    2009-12-01

    Full Text Available This article analyzes the ideas and work of Augusta Moll-Weiss, director of the École des mères. In the context of the turn of the 19th-20th centuries, when praise of the housewife was a recurring theme in moralizing discourse from both working-class and bourgeois circles, Augusta Moll-Weiss set up a program of housekeeping courses. Her work offers a singular view of what activities regarded as typically feminine could offer women. Far from training only "angels of the home", the courses taught at the École des mères also gave students the opportunity to prepare for a professional career. Augusta Moll-Weiss imagined a wealth of career outlets that placed activities of "caring for others" within the sphere of social work. Her commitment to household education was accompanied by a reflection on new domestic models capable of supporting women's entry into the labor market. The rationalization and gendered sharing of domestic tasks, as well as the question of part-time work, were at the heart of her project. On the margins of philanthropy and feminism, Augusta Moll-Weiss thus engaged in a strategy for the recognition of women's role in civic life. After presenting her educational work, this article considers her vision of the "New Housewife" and her approach to feminism.

  1. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given
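    For orientation, the sketch below shows one backpropagation training loop for a single-hidden-layer perceptron in plain NumPy (not the Quadrics library); on a SIMD machine the same matrix operations would be executed in parallel across the processor array. The sizes, random data, and squared-error loss are illustrative assumptions.

        # Hypothetical sketch: backpropagation for a one-hidden-layer MLP.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((32, 4))                      # toy batch: 32 samples, 4 inputs
        y = rng.integers(0, 2, (32, 1)).astype(float)

        W1, b1 = rng.standard_normal((4, 8)) * 0.1, np.zeros(8)
        W2, b2 = rng.standard_normal((8, 1)) * 0.1, np.zeros(1)
        lr = 0.1

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for epoch in range(1000):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # backward pass (mean squared error loss)
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            # gradient descent update
            W2 -= lr * h.T @ d_out / len(X);  b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * X.T @ d_h / len(X);    b1 -= lr * d_h.mean(axis=0)

        print("final training loss:", float(((out - y) ** 2).mean()))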

  2. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  3. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  4. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  5. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  6. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  7. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters
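    A minimal one-dimensional sketch of the finite-difference approach (the study above solves the full three-dimensional electrode problem) is given below: an explicit update advances the soil temperature in time, subject to the usual stability limit on the time step. The grid sizes, diffusivity, and boundary temperature are placeholders.

        # Hypothetical sketch: explicit finite-difference heat diffusion in 1-D.
        import numpy as np

        nx, nt = 100, 500
        dx, dt, alpha = 0.1, 0.001, 1.0             # grid spacing, time step, thermal diffusivity
        assert alpha * dt / dx**2 <= 0.5             # stability condition for the explicit scheme

        T = np.zeros(nx)
        T[0] = 100.0                                 # fixed temperature at the electrode boundary

        for _ in range(nt):
            T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

        print("temperature profile near the electrode:", np.round(T[:5], 2))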

  8. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  9. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor design is often an intuition-driven process in which designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so the task gets increasingly challenging. In the context of renewed reactor design activity, first-of-a-kind projects often run in parallel with advanced design work, although they depend strongly on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features with the accuracy of the ongoing design methods. This should be possible within reasonable simulation time and without the advanced computer skills needed at the project management scale. These tools should also be able to easily accommodate modeling progress in each discipline over the project lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks, etc.) and optimization techniques (genetic algorithms). Database management and visualization are also made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermo-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics is presented. The flexibility of the URANIE tool is also illustrated by presenting several approaches to improve Pareto front quality. (author)

  10. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  11. Generalization of Wilemski-Fixman-Weiss decoupling approximation to the case involving multiple sinks of different sizes, shapes, and reactivities.

    Science.gov (United States)

    Uhm, Jesik; Lee, Jinuk; Eun, Changsun; Lee, Sangyoub

    2006-08-07

    We generalize the Wilemski-Fixman-Weiss decoupling approximation to calculate the transient rate of absorption of point particles into multiple sinks of different sizes, shapes, and reactivities. As an application we consider the case involving two spherical sinks. We obtain a Laplace-transform expression for the transient rate that is in excellent agreement with computer simulations. The long-time steady-state rate has a relatively simple expression, which clearly shows the dependence on the diffusion constant of the particles and on the sizes and reactivities of sinks, and its numerical result is in good agreement with the known exact result that is given in terms of recursion relations.

  12. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  13. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3  +  1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.

  14. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  15. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  16. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  17. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the goal of integrating all the components into a homogeneous software environment. To this end, several methods for distributing applications according to problem type are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are then explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are described, which fits well into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  18. On the making of a system theory of life: Paul A Weiss and Ludwig von Bertalanffy's conceptual connection.

    Science.gov (United States)

    Drack, Manfred; Apfalter, Wilfried; Pouvreau, David

    2007-12-01

    In this article, we review how two eminent Viennese system thinkers, Paul A Weiss and Ludwig von Bertalanffy, began to develop their own perspectives toward a system theory of life in the 1920s. Their work is especially rooted in experimental biology as performed at the Biologische Versuchsanstalt, as well as in philosophy, and they converge in basic concepts. We underline the conceptual connections of their thinking, among them the organism as an organized system, hierarchical organization, and primary activity. With their system thinking, both biologists shared a strong desire to overcome what they viewed as a "mechanistic" approach in biology. Their interpretations are relevant to the renaissance of system thinking in biology--"systems biology." Unless otherwise noted, all translations are our own.

  19. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Running large physical simulations requires powerful computers, which effectively splits the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  20. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  1. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently, the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.

  2. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860-microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
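
    The generator itself is tied to the i860 instruction set, so it is not reproduced here; the sketch below only illustrates the general shape of a fast, register-friendly generator that returns a single-precision uniform deviate (a xorshift64* variant, chosen purely for illustration).

```cpp
// A small, fast pseudo-random number generator (xorshift64*), shown only to
// illustrate the kind of lightweight generator the abstract describes;
// it is NOT the Paragon/i860 generator itself.
#include <cstdint>
#include <cstdio>

struct XorShift64Star {
    uint64_t state;
    explicit XorShift64Star(uint64_t seed)
        : state(seed ? seed : 0x9E3779B97F4A7C15ULL) {}

    uint64_t next_u64() {
        state ^= state >> 12;
        state ^= state << 25;
        state ^= state >> 27;
        return state * 0x2545F4914F6CDD1DULL;
    }

    // Single-precision uniform in [0, 1), analogous to the real*4 output
    // mentioned in the abstract (24 random bits).
    float next_float() {
        return (next_u64() >> 40) * (1.0f / 16777216.0f);
    }
};

int main() {
    XorShift64Star rng(12345);
    for (int i = 0; i < 5; ++i) std::printf("%f\n", rng.next_float());
    return 0;
}
```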

  3. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, ACME-MMF component of the U.S. Department of Energy(DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.
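
    A minimal example of the OpenACC offload pattern referred to above is sketched below. It is not taken from the SAM/ACME source; the kernel, array names, and sizes are hypothetical, and only the pragma structure (a parallel loop with explicit data clauses) reflects the approach described.

```cpp
// Minimal OpenACC offload pattern of the kind used when porting model
// kernels to GPUs: a column-wise state update with explicit data movement.
// Illustrative only; compile with an OpenACC compiler, e.g. nvc++ -acc.
#include <cstdio>
#include <vector>

void update_columns(int ncol, int nlev, float dt,
                    const float* tendency, float* state) {
    // Offload the doubly nested loop; copy inputs in, results back out.
    #pragma acc parallel loop collapse(2) \
        copyin(tendency[0:ncol*nlev]) copy(state[0:ncol*nlev])
    for (int i = 0; i < ncol; ++i) {
        for (int k = 0; k < nlev; ++k) {
            state[i * nlev + k] += dt * tendency[i * nlev + k];
        }
    }
}

int main() {
    const int ncol = 1024, nlev = 72;
    std::vector<float> state(ncol * nlev, 300.0f), tend(ncol * nlev, 0.01f);
    update_columns(ncol, nlev, 30.0f, tend.data(), state.data());
    std::printf("state[0] = %f\n", state[0]);
    return 0;
}
```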

  4. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available An innovative supercomputing grid services devoted to noise threat evaluation were presented. The services described in this paper concern two issues, first is related to the noise mapping, while the second one focuses on assessment of the noise dose and its influence on the human hearing system. The discussed serviceswere developed within the PL-Grid Plus Infrastructure which accumulates Polish academic supercomputer centers. Selected experimental results achieved by the usage of the services proposed were presented. The assessment of the environmental noise threats includes creation of the noise maps using either ofline or online data, acquired through a grid of the monitoring stations. A concept of estimation of the source model parameters based on the measured sound level for the purpose of creating frequently updated noise maps was presented. Connecting the noise mapping grid service with a distributed sensor network enables to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by the exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purpose were introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise the public awareness of the noise threats.

  5. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores within a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  6. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores within a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
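
    The hybrid model described above can be reduced to a few lines: OpenMP threads share work within a process while MPI combines results across processes. The sketch below is not the NPB SP/BT code, only a minimal illustration of the programming model with assumed array sizes.

```cpp
// The hybrid MPI+OpenMP pattern in miniature: MPI between processes (nodes)
// and OpenMP threads within each process. Build e.g. with:
//   mpicxx -fopenmp hybrid.cpp
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n_local = 1 << 20;
    std::vector<double> data(n_local, rank + 1.0);

    // Threads share the node-local array; each thread sums a chunk.
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < n_local; ++i) local_sum += data[i];

    // Ranks combine their partial sums across the machine.
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("ranks=%d threads/rank=%d global sum=%.0f\n",
                    size, omp_get_max_threads(), global_sum);
    MPI_Finalize();
    return 0;
}
```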

  7. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    Software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte Carlo method using the supercomputer SKIF BSU is developed. The program generates a large number of rays propagating from a source to the refractive structure. The ray trajectory is calculated under the assumption of geometrical optics, and the absorption is calculated for each ray inside the refractive structure. Dynamic arrays are used to store the computed ray parameters, which makes it possible to reconstruct the X-ray field distribution very quickly for different detector positions. It was found that increasing the number of processors decreases the calculation time proportionally: simulating 10^8 X-rays takes 3 hours on 1 processor and 6 minutes on 30 processors. 10^9 X-rays were calculated with the 'Xray-SKIF' software, which allows the X-ray field behind the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
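
    A toy version of such ray-by-ray Monte Carlo transport is sketched below: rays from a point source cross a flat absorbing slab and the transmitted intensity is tallied on a one-dimensional detector. It is not the 'Xray-SKIF' code; the attenuation coefficient, geometry, and detector binning are placeholder assumptions.

```cpp
// Toy ray-by-ray Monte Carlo: rays from a point source cross a flat
// absorbing slab (Beer-Lambert attenuation) and are tallied on a 1-D
// detector. All material constants and geometry are placeholders.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const long n_rays = 1000000;
    const double mu = 50.0;               // linear attenuation coefficient [1/m]
    const double slab_thickness = 0.01;   // 1 cm slab
    const double source_to_detector = 1.0;
    const int n_bins = 100;
    std::vector<double> detector(n_bins, 0.0);

    std::mt19937_64 rng(2013);
    std::uniform_real_distribution<double> angle(-0.05, 0.05); // small fan beam

    for (long i = 0; i < n_rays; ++i) {
        double theta = angle(rng);                    // ray direction
        // Path length through the slab grows with obliquity.
        double path = slab_thickness / std::cos(theta);
        double weight = std::exp(-mu * path);         // Beer-Lambert absorption
        // Position where the ray hits the detector plane.
        double x = source_to_detector * std::tan(theta);
        int bin = static_cast<int>((x + 0.05) / 0.1 * n_bins);
        if (bin >= 0 && bin < n_bins) detector[bin] += weight;
    }

    std::cout << "central-bin intensity: " << detector[n_bins / 2] << "\n";
    return 0;
}
```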

  8. ON THE MAKING OF A SYSTEM THEORY OF LIFE: PAUL A WEISS AND LUDWIG VON BERTALANFFY’S CONCEPTUAL CONNECTION

    Science.gov (United States)

    Drack, Manfred; Apfalter, Wilfried; Pouvreau, David

    2010-01-01

    In this article, we review how two eminent Viennese system thinkers, Paul A Weiss and Ludwig von Bertalanffy, began to develop their own perspectives toward a system theory of life in the 1920s. Their work is especially rooted in experimental biology as performed at the Biologische Versuchsanstalt, as well as in philosophy, and they converge in basic concepts. We underline the conceptual connections of their thinking, among them the organism as an organized system, hierarchical organization, and primary activity. With their system thinking, both biologists shared a strong desire to overcome what they viewed as a “mechanistic” approach in biology. Their interpretations are relevant to the renaissance of system thinking in biology—“systems biology.” Unless otherwise noted, all translations are our own. PMID:18217527

  9. Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model

    Science.gov (United States)

    Paga, Pierre; Kühn, Reimer

    2017-08-01

    We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories [specified by initial conditions and governed by first-order dynamics of the form m_{t+1} = f(m_t)], we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics m_{t+1} = f^{-1}(m_t). Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
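
    The forward relaxation dynamics referred to above can be illustrated with a few lines of code. Assuming a symmetric bimodal random field of strength h, a common choice for the Curie-Weiss single-step map is f(m) = [tanh(beta*(J*m + h)) + tanh(beta*(J*m - h))]/2; the sketch below simply iterates this map from a small initial magnetization. Parameter values are arbitrary and the backward branch f^{-1} is omitted.

```cpp
// Forward relaxation dynamics m_{t+1} = f(m_t) for a Curie-Weiss model with
// a symmetric bimodal random field (+/- h). Illustrative sketch only; not
// the paper's rate-function computation.
#include <cmath>
#include <cstdio>

double f(double m, double beta, double J, double h) {
    return 0.5 * (std::tanh(beta * (J * m + h)) + std::tanh(beta * (J * m - h)));
}

int main() {
    const double beta = 1.5, J = 1.0, h = 0.3;   // arbitrary parameters
    double m = 0.05;                             // small initial magnetization
    for (int t = 0; t < 50; ++t) {
        m = f(m, beta, J, h);
        if (t % 10 == 9) std::printf("t=%2d  m=%.6f\n", t + 1, m);
    }
    return 0;
}
```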

  10. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.

  11. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along the field lines, the wandering field lines carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem.

  12. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes- and reactive Euler solvers that has been developed on vector- and parallel supercomputers at the research center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As a sample of proof tests, the special tools have been tested for specific tasks, based on the comparison of experimental and numerical results, which are in reasonable agreement. (author)

  13. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  14. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
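
    The computational core being distributed here is k-means cluster analysis. The serial sketch below shows one assignment-plus-update cycle of that algorithm on multivariate records; it is not the MSTC implementation, which parallelizes exactly these two loops across MPI ranks and offloads them to GPUs.

```cpp
// Serial k-means kernel (assignment + centroid update) over multivariate
// records, shown only to illustrate the algorithm the MSTC code distributes.
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int n = 10000, dim = 4, k = 5, iters = 20;
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    std::vector<std::vector<double>> pts(n, std::vector<double>(dim));
    for (auto& p : pts) for (double& x : p) x = u(rng);

    std::vector<std::vector<double>> centroids(pts.begin(), pts.begin() + k);
    std::vector<int> label(n, 0);

    for (int it = 0; it < iters; ++it) {
        // Assignment step: nearest centroid by squared Euclidean distance.
        for (int i = 0; i < n; ++i) {
            double best = 1e300;
            for (int c = 0; c < k; ++c) {
                double d = 0.0;
                for (int j = 0; j < dim; ++j) {
                    double diff = pts[i][j] - centroids[c][j];
                    d += diff * diff;
                }
                if (d < best) { best = d; label[i] = c; }
            }
        }
        // Update step: each centroid becomes the mean of its members.
        std::vector<std::vector<double>> sum(k, std::vector<double>(dim, 0.0));
        std::vector<int> count(k, 0);
        for (int i = 0; i < n; ++i) {
            ++count[label[i]];
            for (int j = 0; j < dim; ++j) sum[label[i]][j] += pts[i][j];
        }
        for (int c = 0; c < k; ++c)
            if (count[c] > 0)
                for (int j = 0; j < dim; ++j)
                    centroids[c][j] = sum[c][j] / count[c];
    }
    std::printf("cluster 0 size: %d\n",
                (int)std::count(label.begin(), label.end(), 0));
    return 0;
}
```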

  15. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in the recent years combine conventional multi-core CPU with GPU accelerators and provide an opportunity for manifold increase and computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide the interested external researchers the regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for

  16. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  17. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan’s multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan’s utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to

  18. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience related to the implementation of MRBT, an atmospheric dispersion model for short-duration releases, is reported. The model was implemented on a QUADRICS-Q1 supercomputer. A description of the MRBT model is given first: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.

  19. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
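
    The staggered conjugate gradient mentioned above is a lattice-specific instance of the ordinary conjugate gradient recurrence. The sketch below shows that recurrence on a small dense symmetric positive-definite system; the production solver applies the same steps to the sparse staggered Dirac operator, which is not reproduced here.

```cpp
// Plain conjugate gradient for a symmetric positive-definite system, the
// algorithm family behind the staggered CG solver discussed in the abstract.
// Dense toy problem for illustration only.
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

Vec matvec(const std::vector<Vec>& A, const Vec& x) {
    Vec y(x.size(), 0.0);
    for (size_t i = 0; i < A.size(); ++i)
        for (size_t j = 0; j < x.size(); ++j) y[i] += A[i][j] * x[j];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

int main() {
    // Small SPD test matrix: diagonally dominant tridiagonal.
    const int n = 100;
    std::vector<Vec> A(n, Vec(n, 0.0));
    for (int i = 0; i < n; ++i) {
        A[i][i] = 4.0;
        if (i > 0) A[i][i - 1] = -1.0;
        if (i < n - 1) A[i][i + 1] = -1.0;
    }
    Vec b(n, 1.0), x(n, 0.0);

    Vec r = b, p = r;                      // x0 = 0, so r0 = b
    double rs_old = dot(r, r);
    for (int it = 0; it < 1000 && std::sqrt(rs_old) > 1e-10; ++it) {
        Vec Ap = matvec(A, p);
        double alpha = rs_old / dot(p, Ap);
        for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rs_new = dot(r, r);
        for (int i = 0; i < n; ++i) p[i] = r[i] + (rs_new / rs_old) * p[i];
        rs_old = rs_new;
    }
    std::printf("residual norm: %.3e\n", std::sqrt(rs_old));
    return 0;
}
```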

  20. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  1. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within a complex-networks approach. A three-layer system of nested complex-network models is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of the complex-network simulation is proposed, and its performance is studied on different supercomputing systems. The software and information infrastructure of complex-network simulation is discussed, including the organization of distributed calculations, crawling of data from social networks, and visualization of results. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, the evolution of financial networks, and epidemic spreading.

  2. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  3. Combined use of clips and nylon snare ("tulip-bundle") as a rescue endoscopic bleeding control in a mallory-weiss syndrome.

    Science.gov (United States)

    Ivekovic, Hrvoje; Radulovic, Bojana; Jankovic, Suzana; Markos, Pave; Rustemovic, Nadan

    2014-01-01

    Mallory-Weiss syndrome (MWS) accounts for 6-14% of all cases of upper gastrointestinal bleeding. Prognosis of patients with MWS is generally good, with a benign course and rare recurrence of bleeding. However, no strict recommendations exist in regard to the mode of action after a failure of primary endoscopic hemostasis. We report a case of an 83-year-old male with MWS and rebleeding after the initial endoscopic treatment with epinephrine and clips. The final endoscopic control of bleeding was achieved by a combined application of clips and a nylon snare in a "tulip-bundle" fashion. The patient had an uneventful postprocedural clinical course and was discharged from the hospital five days later. To the best of our knowledge, this is the first case report showing the "tulip-bundle" technique as a rescue endoscopic bleeding control in the esophagus.

  4. Rezension von: Alexandra Weiss: Regulation und Politisierung von Geschlechterverhältnissen im fordistischen und postfordistischen Kapitalismus. Münster: Verlag Westfälisches Dampfboot 2012.

    Directory of Open Access Journals (Sweden)

    Heike Kahlert

    2013-12-01

    Full Text Available In her thoroughly worthwhile monograph, Alexandra Weiss examines the stability, change, and contradictions of the gender order in the transformations of capitalism, using Austria as an example. She exposes the masculinism of neoliberal ideas and politics, analyzes the re-traditionalization of the political in post-Fordism, and reflects on perspectives for feminist political action under the conditions of contemporary statehood. At the center of emancipatory (gender) politics she sees the re-establishment of equality as a precondition of freedom in a democratic society. This forceful study of society ultimately convinces through its stringent argumentation and refreshingly clear language.

  5. Jackson-Weiss syndrome: Clinical and radiological findings in a large kindred and exclusion of the gene from 7p21 and 5qter

    Energy Technology Data Exchange (ETDEWEB)

    Ades, L.C.; Haan, E.A.; Mulley, J.C.; Senga, I.P.; Morris, L.L.; David, D.J. [Women's and Children's Hospital, North Adelaide (Australia)

    1994-06-01

    We describe the clinical and radiological manifestations of the Jackson-Weiss syndrome (JWS) in a large South Australian kindred. Radiological abnormalities not previously described in the hands include coned epiphyses, distal and middle phalangeal hypoplasia, and carpal bone malsegmentation. New radiological findings in the feet include coned epiphyses, hallux valgus, phalangeal, tarso-navicular and calcaneo-navicular fusions, and uniform absence of metatarsal fusions. Absence of linkage to eight markers along the short arm of chromosome 7 excluded allelism between JWS and Saethre-Chotzen syndrome at 7p21. No linkage was detected to D5S211, excluding allelism to another recently described cephalosyndactyly syndrome mapping to 5qter. 35 refs., 5 figs., 4 tabs.

  6. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of Complex Engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems-Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed

  7. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in Trubal was converted to a static memory arrangement and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  8. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985, a 64-node, 1 GF machine completed in August 1987, and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  9. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  10. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  11. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  12. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, making the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
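
    To make the virtual-topology idea above concrete, the following minimal sketch (mpi4py; illustrative only, not the optimized topology scheme from the record) builds a 3-D Cartesian communicator, finds the nearest neighbours in one direction, and exchanges a ghost plane of a hypothetical field block.

        # Sketch of an MPI Cartesian virtual topology for a 3-D domain
        # decomposition with one ghost-plane exchange (mpi4py).
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        dims = MPI.Compute_dims(comm.Get_size(), [0, 0, 0])   # balanced process grid
        cart = comm.Create_cart(dims, periods=[False, False, False], reorder=True)

        left, right = cart.Shift(0, 1)         # neighbours along the x direction

        ez = np.zeros((34, 32, 32))            # hypothetical local field block (ghost layers in x)
        sendbuf = np.ascontiguousarray(ez[-2, :, :])
        recvbuf = np.empty_like(sendbuf)
        cart.Sendrecv(sendbuf, dest=right, recvbuf=recvbuf, source=left)
        if left != MPI.PROC_NULL:
            ez[0, :, :] = recvbuf              # fill the ghost plane received from the left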

  13. Combined Use of Clips and Nylon Snare (“Tulip-Bundle”) as a Rescue Endoscopic Bleeding Control in a Mallory-Weiss Syndrome

    Directory of Open Access Journals (Sweden)

    Hrvoje Ivekovic

    2014-01-01

    Full Text Available Mallory-Weiss syndrome (MWS) accounts for 6–14% of all cases of upper gastrointestinal bleeding. Prognosis of patients with MWS is generally good, with a benign course and rare recurrence of bleeding. However, no strict recommendations exist in regard to the mode of action after a failure of primary endoscopic hemostasis. We report a case of an 83-year-old male with MWS and rebleeding after the initial endoscopic treatment with epinephrine and clips. The final endoscopic control of bleeding was achieved by a combined application of clips and a nylon snare in a “tulip-bundle” fashion. The patient had an uneventful postprocedural clinical course and was discharged from the hospital five days later. To the best of our knowledge, this is the first case report showing the “tulip-bundle” technique as a rescue endoscopic bleeding control in the esophagus.

  14. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnect is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. It is needless to say that highly tuned software targeting new architectures such as many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen_s) and an enhanced-performance version (eigen_sx), on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide-and-conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and a flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS for a matrix of dimension two hundred thousand. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY-representation. Even though this version is still at a test stage, it reaches 4.7 TFLOPS for a matrix of the same dimension, exceeding the performance of eigen_s. (author)
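
    The algorithmic stages named in this record (Householder tridiagonalization, divide-and-conquer on the tridiagonal matrix, and Householder back-transformation) can be sketched on a single node with SciPy's LAPACK wrappers. This is only an illustration of the stages, not the eigen_s/eigen_sx implementation.

        # Single-node sketch: tridiagonalization, tridiagonal eigensolve
        # (production codes use the LAPACK divide-and-conquer driver here),
        # and back-transformation of the eigenvectors.
        import numpy as np
        from scipy.linalg import hessenberg, eigh_tridiagonal

        n = 500
        a = np.random.rand(n, n)
        a = 0.5 * (a + a.T)                      # random symmetric test matrix

        t, q = hessenberg(a, calc_q=True)        # tridiagonal for symmetric input
        d = np.diag(t).copy()
        e = np.diag(t, -1).copy()

        w, v_tri = eigh_tridiagonal(d, e)        # tridiagonal eigenproblem
        v = q @ v_tri                            # Householder back-transformation

        print(np.allclose(a @ v[:, 0], w[0] * v[:, 0], atol=1e-6))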

  15. Palestina kak neudavshejesja gossudarstvo / Shlomo Avineri

    Index Scriptorium Estoniae

    Avineri, Shlomo

    2007-01-01

    On the Palestinian conflict. The author takes the view that although it is easy to blame the causes of the crisis on the Palestinian political leadership, the Israeli occupation, or US policy, the real causes lie in the Palestinians' inability to overcome their historical divisions and to create a unified government that acts in a coordinated manner.

  16. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  17. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  18. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  19. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs

  20. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  1. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain may vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
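
    The iterative finite-difference scheme with damped (preconditioned) residuals mentioned above can be illustrated by a serial sketch for a 1-D Poisson-type problem; the damping factor and pseudo time step are illustrative guesses, not the authors' values, and the parallel halo exchange is omitted.

        # Serial sketch of a damped iterative finite-difference solver.
        import numpy as np

        nx = 256
        dx = 1.0 / (nx - 1)
        p = np.zeros(nx)
        dpdt = np.zeros(nx)
        rhs = np.ones(nx)
        damp = 1.0 - 4.0 / nx          # damping applied to the residual "velocity"
        dtau = 0.4 * dx                # pseudo time step

        for it in range(50000):
            res = np.zeros(nx)
            res[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2 - rhs[1:-1]
            dpdt = damp * dpdt + res          # damped (preconditioned) residual update
            p[1:-1] += dtau * dpdt[1:-1]      # purely local (stencil-only) update
            if np.max(np.abs(res)) < 1e-6:
                break
        print(it, np.max(np.abs(res)))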

  2. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  3. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  4. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
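
    A toy version of the wavelet-transform/vector-quantization idea can be written with PyWavelets and scikit-learn: decompose a field into subbands, then train a small per-subband codebook on short coefficient vectors. The field, wavelet, vector length and codebook size below are arbitrary stand-ins, not the parameters of the described codec.

        # Toy wavelet + vector-quantization sketch on one detail subband.
        import numpy as np
        import pywt
        from sklearn.cluster import KMeans

        field = np.random.rand(128, 128)                 # stand-in for one model field
        coeffs = pywt.wavedec2(field, 'db4', level=3)    # subband decomposition

        band = coeffs[-1][0]                             # finest horizontal-detail subband
        vec_len, codebook_size = 4, 64
        flat = band.ravel()
        flat = flat[: flat.size - flat.size % vec_len]   # drop the ragged tail
        vectors = flat.reshape(-1, vec_len)

        km = KMeans(n_clusters=codebook_size, n_init=3).fit(vectors)
        indices = km.predict(vectors)                    # what would be stored/transmitted
        decoded = km.cluster_centers_[indices]           # decoder-side table lookup

        bits_per_coeff = np.log2(codebook_size) / vec_len
        print(f"{bits_per_coeff:.2f} bits/coefficient, "
              f"MSE = {np.mean((vectors - decoded) ** 2):.3e}")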

  5. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  6. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  7. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  8. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with the Navier-Stokes computations that became widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations; CFD, aeroelastic, and controls coupling for flutter suppression and active control; and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment that integrates many technologies. 40 refs

  9. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  10. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  11. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity as to which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists of representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product, linear combination of vectors, and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
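
    The "three basic algebraic operations" strategy can be illustrated with a minimal explicit time-integration loop in which every step is expressed as a sparse matrix-vector product, a vector combination (axpy) and a dot product; the 1-D Laplacian below is a hypothetical stand-in for the real discrete operator.

        # Time step expressed only through SpMV, axpy and dot product (SciPy sparse).
        import numpy as np
        import scipy.sparse as sp

        n = 1000
        A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr')
        u = np.random.rand(n)
        dt = 0.1

        for step in range(100):
            Au = A @ u                         # sparse matrix-vector product
            u = u + dt * Au                    # linear combination of vectors (axpy)
            norm_u = np.sqrt(u @ u)            # dot product, e.g. for monitoring
        print(norm_u)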

  12. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  13. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available The data of modeling on supercomputer system SKIF of technological process of  molds filling by means of computer system 'ProLIT-lc', and also data of modeling of the steel pouring process by means ofTroNRS-lc'are presented. The influence of number of  processors of  multinuclear computer system SKIF on acceleration and time of  modeling of technological processes, connected with production of castings and slugs, is shown.

  14. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  15. Distinctive aspects of peptic ulcer disease, Dieulafoy's lesion, and Mallory-Weiss syndrome in patients with advanced alcoholic liver disease or cirrhosis

    Science.gov (United States)

    Nojkov, Borko; Cappell, Mitchell S

    2016-01-01

    AIM: To systematically review the data on distinctive aspects of peptic ulcer disease (PUD), Dieulafoy’s lesion (DL), and Mallory-Weiss syndrome (MWS) in patients with advanced alcoholic liver disease (aALD), including alcoholic hepatitis or alcoholic cirrhosis. METHODS: Computerized literature search performed via PubMed using the following medical subject heading terms and keywords: “alcoholic liver disease”, “alcoholic hepatitis”, “alcoholic cirrhosis”, “cirrhosis”, “liver disease”, “upper gastrointestinal bleeding”, “non-variceal upper gastrointestinal bleeding”, “PUD”, “DL”, “Mallory-Weiss tear”, and “MWS”. RESULTS: While the majority of acute gastrointestinal (GI) bleeding with aALD is related to portal hypertension, about 30%-40% of acute GI bleeding in patients with aALD is unrelated to portal hypertension. Such bleeding constitutes an important complication of aALD because of its frequency, severity, and associated mortality. Patients with cirrhosis have a markedly increased risk of PUD, which further increases with the progression of cirrhosis. Patients with cirrhosis or aALD and peptic ulcer bleeding (PUB) have worse clinical outcomes than other patients with PUB, including uncontrolled bleeding, rebleeding, and mortality. Alcohol consumption, nonsteroidal anti-inflammatory drug use, and portal hypertension may have a pathogenic role in the development of PUD in patients with aALD. Limited data suggest that Helicobacter pylori does not play a significant role in the pathogenesis of PUD in most cirrhotic patients. The frequency of bleeding from DL appears to be increased in patients with aALD. DL may be associated with an especially high mortality in these patients. MWS is strongly associated with heavy alcohol consumption from binge drinking or chronic alcoholism, and is associated with aALD. Patients with aALD have more severe MWS bleeding and are more likely to rebleed when compared to non

  16. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  17. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  18. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64 rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter are perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times larger compared to the speed we quoted in our submission. As we have pointed out in our paper QCD is notoriously sensitive to network and memory latencies, has a relatively high communication to computation ratio which can not be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and a 30 year long dream for lattice QCD
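
    The two global sums mentioned above are the two dot products inside each conjugate-gradient iteration; on a distributed machine they become machine-wide reductions (e.g. MPI_Allreduce), while the operator application itself needs only neighbour communication. A plain NumPy sketch of CG with those two reductions marked follows; the SPD test matrix is a stand-in, not the lattice Dirac operator or the BG/L code.

        # Conjugate gradient with the two per-iteration global sums marked.
        import numpy as np

        def cg(apply_mat, b, tol=1e-8, maxiter=500):
            x = np.zeros_like(b)
            r = b - apply_mat(x)
            p = r.copy()
            rr = r @ r                        # global sum no. 1
            for _ in range(maxiter):
                Ap = apply_mat(p)             # operator: neighbour communication only
                alpha = rr / (p @ Ap)         # global sum no. 2
                x += alpha * p
                r -= alpha * Ap
                rr_new = r @ r                # global sum no. 1 of the next iteration
                if rr_new < tol ** 2:
                    break
                p = r + (rr_new / rr) * p
                rr = rr_new
            return x

        n = 200
        m = np.random.rand(n, n)
        spd = m @ m.T + n * np.eye(n)         # well-conditioned SPD test operator
        sol = cg(lambda v: spd @ v, np.ones(n))
        print(np.linalg.norm(spd @ sol - np.ones(n)))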

  19. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas; including the important effects of self-absorption of line-radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message passing paradigm, and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours
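
    The expected factor-of-three speedup quoted above is consistent with Amdahl's law for a 70% parallel fraction, as the short check below illustrates (an illustrative calculation; only the 70% figure and the 30-hour runtime come from the record).

        # Amdahl's law: speedup = 1 / ((1 - p) + p / N); with p = 0.7 this tends
        # to 1 / 0.3 = 3.3 as N grows, so 30 hours become roughly 10 hours.
        def amdahl(parallel_fraction, n_procs):
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_procs)

        for n_procs in (4, 16, 64, 1_000_000):
            print(n_procs, round(amdahl(0.7, n_procs), 2))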

  20. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak performance. For sparse matrices, storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, the need to insert new nonzeros in the sparse storage scheme, the lack of data locality, etc.). On many high-speed computers a dense matrix technique is therefore preferable to a sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...

  1. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF’s Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  2. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA - the Production and Distributed Analysis Workload Management System - has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects to use PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computer resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output file; this is very similar to what ATLAS does to process and simulate data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
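
    The chunking pattern described above (split an input into independent pieces, run each as a separate job, merge the outputs) can be sketched generically as follows; the file names, chunk size and merge-by-concatenation step are hypothetical simplifications of what the PALEOMIX/PanDA adaptation actually does.

        # Generic split/process/merge sketch for chunked job submission.
        from pathlib import Path

        def split_into_chunks(path, lines_per_chunk=100_000):
            chunk_paths, buf, idx = [], [], 0
            with open(path) as fh:
                for line in fh:
                    buf.append(line)
                    if len(buf) == lines_per_chunk:
                        out = Path(f"{path}.chunk{idx:04d}")
                        out.write_text("".join(buf))
                        chunk_paths.append(out)
                        buf, idx = [], idx + 1
            if buf:
                out = Path(f"{path}.chunk{idx:04d}")
                out.write_text("".join(buf))
                chunk_paths.append(out)
            return chunk_paths                 # each chunk becomes one job payload

        def merge_outputs(chunk_outputs, merged_path):
            with open(merged_path, "w") as out:
                for part in chunk_outputs:
                    out.write(Path(part).read_text())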

  3. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF)- R and D Division, 92 - Clamart (France)

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  4. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5% accuracy, and of Overflow to within 10%.

  5. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per watt and approximately 3.69 MFlop/s per dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
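
    For orientation, the Lennard-Jones pair interaction used in the benchmark can be written as a short O(N^2) reference implementation in NumPy; the production SPaSM code uses cell lists and runs the force loop on the Cell processors, so this sketch only illustrates the physics, not the Roadrunner implementation.

        # O(N^2) NumPy reference for Lennard-Jones pair forces.
        import numpy as np

        def lj_forces(pos, eps=1.0, sigma=1.0, rcut=2.5):
            forces = np.zeros_like(pos)
            for i in range(len(pos) - 1):
                rij = pos[i + 1:] - pos[i]                  # vectors from atom i to later atoms
                r2 = np.einsum('ij,ij->i', rij, rij)
                mask = r2 < rcut ** 2
                inv_r2 = sigma ** 2 / r2[mask]
                inv_r6 = inv_r2 ** 3
                # Force magnitude over r, written with 1/r^2 so no sqrt is needed.
                fmag = 24.0 * eps * (2.0 * inv_r6 ** 2 - inv_r6) / r2[mask]
                fij = fmag[:, None] * rij[mask]
                forces[i] -= fij.sum(axis=0)                # Newton's third law
                forces[i + 1:][mask] += fij
            return forces

        pos = np.random.rand(256, 3) * 10.0                 # toy configuration
        print(lj_forces(pos).shape)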

  6. Molecular Weiss domain polarization in piezoceramics to diaphragm, cantilever and channel construction in low-temperature-cofired ceramics for micro-fluidic applications

    International Nuclear Information System (INIS)

    Khanna, P.K.; Ahmad, S.; Grimme, R.

    2005-01-01

    This paper presents the efforts made to study the process of comminution to Weiss domain polarization and phase transition in piezoceramics, together with the versatility of low-temperature-cofired ceramics-based devices and components for their ready adoption for typical applications in the area of micro-fluidics. A conceptual micro-fluidic module has been presented and a few unit entities necessary for its realization have been described. The purpose of these entities is to position the sensors and actuators by using piezoelectric materials. Investigations are performed to make useful constructions like diaphragms and cantilevers for laying the sensing elements, cavities for burying the electronic chip devices, and channels for fluid transportation. In order to realize these constructions, the basic step involves machining of circular, straight-line, rectangular and square-shaped structures in the green ceramic tapes, followed by lamination and firing, with post-machining in some cases. The diaphragm and cavity include one or more un-machined layers stacked together with several machined layers with rectangular or square slits. The cantilever is an extension of the diaphragm creation process with the inclusion of a post-machining step. The channel essentially consists of a machined green ceramic layer sandwiched between an un-machined and a partially machined layer. The fabrication for all the above constructions has been exemplified and the details have been discussed

  7. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  8. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
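
    The load-balancing rule described above (allocate CPUs to each nested layer in proportion to its number of grid points, then decompose within each layer) can be sketched as a small allocation routine; the grid sizes and CPU count below are illustrative, not the actual configuration used on the K computer.

        # Toy version of the proportional CPU allocation across nested grid layers.
        def allocate_cpus(points_per_layer, total_cpus):
            total_points = sum(points_per_layer)
            raw = [total_cpus * p / total_points for p in points_per_layer]
            alloc = [max(1, int(x)) for x in raw]
            leftovers = total_cpus - sum(alloc)
            by_remainder = sorted(range(len(raw)), key=lambda i: raw[i] - int(raw[i]),
                                  reverse=True)
            for i in by_remainder[:max(0, leftovers)]:
                alloc[i] += 1                  # hand out remaining CPUs by largest remainder
            return alloc

        layers = [1200 * 800, 2400 * 1600, 4800 * 3200]   # coarse to fine nests
        print(allocate_cpus(layers, 1024))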

  9. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  10. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted the multigroup cross section constants library of the Bondarenko type with 190 groups: 132 groups for the fast and epithermal regions and 58 groups for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
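
    The two ideas named in this record, multi-particle (vectorized) tracking and flight sampling with pseudo-scattering against a majorant cross-section, can be illustrated with a toy 1-D slab problem in NumPy; the geometry, cross-sections and absence of absorption are simplifications, not features of the described code.

        # Toy vectorized tracking with pseudo-scattering (delta tracking).
        import numpy as np

        rng = np.random.default_rng(0)
        n_particles = 100_000
        sigma_majorant = 1.0                        # upper bound over all regions

        x = np.zeros(n_particles)                   # positions in a slab [0, 10]
        mu = rng.uniform(-1.0, 1.0, n_particles)    # direction cosines
        alive = np.ones(n_particles, dtype=bool)

        def sigma_total(pos):
            """Hypothetical space-dependent total cross-section."""
            return np.where(pos < 5.0, 0.3, 0.9)

        for _ in range(200):
            if not alive.any():
                break
            dist = -np.log(rng.random(alive.sum())) / sigma_majorant   # batch flight lengths
            x[alive] += mu[alive] * dist
            alive &= (x >= 0.0) & (x <= 10.0)                          # remove escaped particles
            # Accept a real collision with probability sigma_t / sigma_majorant;
            # otherwise it is a pseudo-scattering event and the flight continues.
            real = rng.random(n_particles) < sigma_total(x) / sigma_majorant
            collided = alive & real
            mu[collided] = rng.uniform(-1.0, 1.0, collided.sum())      # isotropic scatter

        print("particles still tracked:", int(alive.sum()))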

  11. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  12. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only from 25 to 200 times slower than real time.

  13. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  14. Psychometric validation of the Weiss Functional Impairment Rating Scale-Parent Report Form in children and adolescents with attention-deficit/hyperactivity disorder.

    Science.gov (United States)

    Gajria, Kavita; Kosinski, Mark; Sikirica, Vanja; Huss, Michael; Livote, Elayne; Reilly, Kathleen; Dittmann, Ralf W; Erder, M Haim

    2015-11-17

    Measurement properties of the Weiss Functional Impairment Rating Scale-Parent Report Form (WFIRS-P), which assesses attention-deficit/hyperactivity disorder (ADHD)-related functional impairment in children/adolescents (6-17 years), were examined. Data from seven randomized, controlled trials were pooled. Analyses were conducted in two random half-samples. The WFIRS-P conceptual framework was evaluated using confirmatory factor analyses (CFA). Reliability was estimated using internal consistency (Cronbach's alpha) and test-retest reliability methods. Convergent validity was assessed using correlations between WFIRS-P domain scores and the ADHD-RS-IV and Clinical Global Impression-Severity (CGI-S) scales. Responsiveness was tested by comparing mean changes in WFIRS-P domain scores between responders and non-responders based on clinical criteria. CFA adequately confirmed the item-to-scale relationships defined in the WFIRS-P conceptual framework. Cronbach's alpha coefficient exceeded 0.7 for all domains and test-retest reliability exceeded 0.7 for all but Risky Activities. With few exceptions, WFIRS-P domains correlated significantly with ADHD-RS-IV Total, Inattention and Hyperactivity-Impulsivity scores and with CGI-S at baseline and follow-up in both random half-samples. Mean changes in WFIRS-P domain scores differed significantly between responder and non-responder groups in the expected direction (p < 0.001). Study results support the reliability, validity and responsiveness of the WFIRS-P. Findings were replicated between two random samples, further demonstrating the robustness of results.
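
    For readers unfamiliar with the internal-consistency statistic used here, Cronbach's alpha for a respondents-by-items score matrix can be computed as in the sketch below; the random scores are stand-ins, not WFIRS-P data.

        # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
        import numpy as np

        def cronbach_alpha(items):
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

        scores = np.random.randint(0, 4, size=(200, 10))   # 200 respondents, 10 items
        print(round(cronbach_alpha(scores), 3))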

  15. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, on which the models are run, can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  16. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.
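
    The MPI-versus-computation trade-off discussed above is typically probed with message-passing microbenchmarks. The sketch below is a toy two-rank ping-pong in mpi4py, not the benchmark suite used in the paper; the message size and repetition count are arbitrary.

      # run with exactly two ranks, e.g.: srun -n 2 python pingpong.py
      import time
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      nbytes, reps = 1 << 20, 100                 # 1 MiB messages, 100 round trips
      buf = np.zeros(nbytes, dtype=np.uint8)

      comm.Barrier()
      t0 = time.perf_counter()
      for _ in range(reps):
          if rank == 0:
              comm.Send(buf, dest=1); comm.Recv(buf, source=1)
          elif rank == 1:
              comm.Recv(buf, source=0); comm.Send(buf, dest=0)
      elapsed = time.perf_counter() - t0
      if rank == 0:
          print(f"effective bandwidth ~ {2 * reps * nbytes / elapsed / 1e9:.2f} GB/s")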

  17. Mallory-Weiss tear

    Science.gov (United States)

    ... the liver and problems with blood clotting make future bleeding episodes more likely to occur. ... Jensen DM. Gastrointestinal hemorrhage. In: Goldman L, Schafer AI, eds. Goldman-Cecil Medicine. 25th ed. Philadelphia, PA: ...

  18. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q
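
    The kind of extrapolation behind such accelerated tests is simple in outline: an upset cross-section measured under the intense LANSCE beam is rescaled to the much weaker natural neutron flux. The numbers below are entirely hypothetical and are not the Q-machine data; the terrestrial flux is only a rough sea-level figure.

      # toy accelerated-test extrapolation (all numbers invented for illustration)
      failures_observed = 12            # upsets seen during beam exposure
      beam_fluence = 3.0e11             # neutrons/cm^2 delivered in the test
      cross_section = failures_observed / beam_fluence    # upsets per unit fluence

      terrestrial_flux = 13.0           # ~sea-level neutrons/cm^2/hour (>10 MeV), approximate
      upsets_per_hour = cross_section * terrestrial_flux
      print(f"predicted upsets/hour: {upsets_per_hour:.2e}")
      print(f"mean time between upsets: {1.0 / upsets_per_hour:.0f} hours")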

  19. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
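
    As a small illustration of the "iterative solution algorithms with associated preconditioners" mentioned above, the sketch below solves a toy sparse system with conjugate gradients and a Jacobi (diagonal) preconditioner. It is a single-process SciPy example, not the distributed-memory software described in the proposal.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import cg, LinearOperator

      n = 200
      # 1D Poisson matrix as a stand-in for a finite-difference/finite-element system
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      d_inv = 1.0 / A.diagonal()                      # Jacobi preconditioner
      M = LinearOperator((n, n), matvec=lambda x: d_inv * x)

      x, info = cg(A, b, M=M)
      print("converged" if info == 0 else f"info={info}",
            "residual norm:", np.linalg.norm(A @ x - b))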

  20. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources, the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be, that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  1. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
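
    For orientation, the sketch below solves a small tridiagonal (banded) system with SciPy's LAPACK-backed banded solver. It only illustrates the banded storage convention; it is not an implementation of the VCR or CRAT algorithms described above.

      import numpy as np
      from scipy.linalg import solve_banded

      n = 8
      # tridiagonal matrix in banded storage: rows hold the super-, main and sub-diagonal
      ab = np.zeros((3, n))
      ab[0, 1:] = -1.0      # superdiagonal
      ab[1, :] = 2.0        # main diagonal
      ab[2, :-1] = -1.0     # subdiagonal
      b = np.ones(n)

      x = solve_banded((1, 1), ab, b)
      print(x)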

  2. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  3. The Weiss Functional Impairment Rating Scale-Parent Form for assessing ADHD: evaluating diagnostic accuracy and determining optimal thresholds using ROC analysis.

    Science.gov (United States)

    Thompson, Trevor; Lloyd, Andrew; Joseph, Alain; Weiss, Margaret

    2017-07-01

    The Weiss Functional Impairment Rating Scale-Parent Form (WFIRS-P) is a 50-item scale that assesses functional impairment on six clinically relevant domains typically affected in attention-deficit/hyperactivity disorder (ADHD). As functional impairment is central to ADHD, the WFIRS-P offers potential as a tool for assessing functional impairment in ADHD. These analyses were designed to examine the overall performance of WFIRS-P in differentiating ADHD and non-ADHD cases using receiver operating characteristics (ROC) analysis. This is the first attempt to empirically determine the level of functional impairment that differentiates ADHD children from normal controls. This observational study comprised 5-19-year-olds with physician-diagnosed ADHD (n = 476) and non-ADHD controls (n = 202). ROC analysis evaluated the ability of WFIRS-P to discriminate between ADHD and non-ADHD, and identified a WFIRS-P cut-off score that optimises correct classification. Data were analysed for the complete sample, for males versus females and for participants in two age groups (5-12 versus 13-19 years). Area under the curve (AUC) was 0.91 (95% confidence interval 0.88-0.93) for the overall WFIRS-P score, suggesting highly accurate classification of ADHD distinct from non-ADHD. Sensitivity (0.83) and specificity (0.85) were maximal for a mean overall WFIRS-P score of 0.65, suggesting that this is an appropriate threshold for differentiation. DeLong's test found no significant differences in AUCs for males versus females or 5-12 versus 13-19 years, suggesting that WFIRS-P is an accurate classifier of ADHD across gender and age. When assessing function, WFIRS-P appears to provide a simple and effective basis for differentiating between individuals with/without ADHD in terms of functional impairment.
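
    The ROC procedure described above can be reproduced in outline on simulated scores: compute the AUC, then pick the threshold that maximises sensitivity plus specificity (Youden's J). The data below are synthetic, so the study's 0.65 cut-off will not be recovered from them.

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(1)
      # hypothetical mean WFIRS-P scores for controls (label 0) and ADHD cases (label 1)
      scores = np.concatenate([rng.normal(0.3, 0.2, 200), rng.normal(0.9, 0.3, 400)])
      labels = np.concatenate([np.zeros(200), np.ones(400)])

      auc = roc_auc_score(labels, scores)
      fpr, tpr, thresholds = roc_curve(labels, scores)
      j = tpr - fpr                                   # Youden's J statistic
      best_cutoff = thresholds[np.argmax(j)]
      print(f"AUC = {auc:.2f}, optimal cut-off ~ {best_cutoff:.2f}")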

  4. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
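
    Downstream of the DFT calculations, the screening step sketched in the talk amounts to filtering a table of computed properties. The example below uses a tiny, made-up property table rather than querying the Materials Project, and the screening criteria are illustrative only.

      import pandas as pd

      # hypothetical computed-property table; a real workflow would pull such data
      # from a DFT database instead of hard-coding it
      df = pd.DataFrame({
          "formula":         ["CuAlS2", "AgInTe2", "NaCoO2", "CuFeS2"],
          "band_gap_eV":     [0.9, 0.4, 1.6, 0.1],
          "e_above_hull_eV": [0.00, 0.02, 0.00, 0.05],
      })

      # keep thermodynamically stable candidates with a moderate band gap
      hits = df[(df.e_above_hull_eV <= 0.02) & (df.band_gap_eV.between(0.2, 1.2))]
      print(hits)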

  5. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding by synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for 2 implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA) running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows a good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers a better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost to performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel with 64 nodes (128 cores).
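
    The "global spiking list" idea described above, where every neuron that fires in a time step is indexed so its influence can be applied everywhere at once, can be sketched with a toy leaky integrate-and-fire loop. This is not the ODLM model or its MPI/CUDA code, just the per-step bookkeeping.

      import numpy as np

      rng = np.random.default_rng(0)
      n, steps = 1000, 200
      dt, tau, v_thresh = 1e-3, 20e-3, 1.0
      v = np.zeros(n)                                  # membrane potentials
      spikes = np.array([], dtype=int)

      for _ in range(steps):
          i_ext = 0.12 * rng.random(n)                 # toy input drive
          v += dt / tau * (-v) + i_ext                 # leaky integration (toy units)
          spikes = np.flatnonzero(v >= v_thresh)       # this step's global spiking list
          v[spikes] = 0.0                              # reset neurons that fired
          # in a distributed run these indices would be exchanged (e.g. MPI allgather)
          # so all compute units apply the influence of every spike simultaneously
      print("spikes in final step:", spikes.size)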

  6. Dynamical systems

    CERN Document Server

    Sternberg, Shlomo

    2010-01-01

    Celebrated mathematician Shlomo Sternberg, a pioneer in the field of dynamical systems, created this modern one-semester introduction to the subject for his classes at Harvard University. Its wide-ranging treatment covers one-dimensional dynamics, differential equations, random walks, iterated function systems, symbolic dynamics, and Markov chains. Supplementary materials offer a variety of online components, including PowerPoint lecture slides for professors and MATLAB exercises. "Even though there are many dynamical systems books on the market, this book is bound to become a classic. The the

  7. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less

  8. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than
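
    The solver described above (staggered-grid velocity-pressure finite differences in 3D) is far beyond a short listing, but the time-stepping idea can be shown in one dimension. The grid spacing, time step, velocity and source below are toy values, not the Red Sea model.

      import numpy as np

      nx, nt = 400, 900
      dx, dt = 10.0, 1e-3                    # 10 m cells, 1 ms steps (toy values)
      c = np.full(nx, 2000.0)                # constant 2000 m/s velocity model
      p_prev, p = np.zeros(nx), np.zeros(nx)
      src, f0, t0 = nx // 2, 25.0, 0.04      # source position, Ricker frequency, delay

      for it in range(nt):
          lap = np.zeros(nx)
          lap[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
          p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
          arg = (np.pi * f0 * (it * dt - t0)) ** 2
          p_next[src] += (1.0 - 2.0 * arg) * np.exp(-arg)   # Ricker wavelet source
          p_prev, p = p, p_next
      print("max |p| at final step:", np.abs(p).max())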

  9. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
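
    The record does not give MRBT's equations, but the generic steady-state Gaussian plume form it resembles is easy to state; the sketch below evaluates that textbook formula with invented release parameters and should not be read as MRBT's actual short-duration (puff-type) formulation.

      import numpy as np

      def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
          """Ground-reflecting Gaussian plume concentration for a continuous point
          source of strength q (g/s) at stack height h (m) in wind speed u (m/s);
          sigma_y, sigma_z (m) would normally depend on downwind distance and stability."""
          lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
          vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                      + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
          return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

      # toy release: 1 kg/s at 30 m, 5 m/s wind, ground-level receptor on the plume axis
      print(gaussian_plume(q=1000.0, u=5.0, y=0.0, z=0.0, h=30.0,
                           sigma_y=35.0, sigma_z=20.0))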

  10. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  11. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  12. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  13. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  14. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  15. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL –to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: We conducted interviews with code teams and selected two codes to port; We learned how to program in the new models and ported the codes; We debugged and tuned the ported applications; We measured results, and documented our findings. We conclude that UPC is a challenge for porting code, Berkeley UPC is not very robust, and UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  16. Variations on a theme by Kepler

    CERN Document Server

    Guillemin, Victor W

    2006-01-01

    This book is based on the Colloquium Lectures presented by Shlomo Sternberg in 1990. The authors delve into the mysterious role that groups, especially Lie groups, play in revealing the laws of nature by focusing on the familiar example of Kepler motion: the motion of a planet under the attraction of the sun according to Kepler's laws. Newton realized that Kepler's second law, that equal areas are swept out in equal times, has to do with the fact that the force is directed radially to the sun. Kepler's second law is really the assertion of the conservation of angular momentum, reflecting the rot
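
    The point about Kepler's second law being conservation of angular momentum can be checked numerically: for a purely radial force the specific angular momentum of the integrated orbit should not drift. The sketch below is a toy planar two-body integration in arbitrary units, unrelated to the book's own treatment.

      import numpy as np

      GM = 1.0
      r = np.array([1.0, 0.0])
      v = np.array([0.0, 1.2])                  # slightly non-circular orbit
      dt, steps = 1e-3, 20000
      L0 = r[0] * v[1] - r[1] * v[0]            # specific angular momentum

      for _ in range(steps):
          a = -GM * r / np.linalg.norm(r) ** 3  # central (radial) attraction
          v += a * dt                           # symplectic Euler step
          r += v * dt
      L1 = r[0] * v[1] - r[1] * v[0]
      print(f"relative drift in angular momentum: {abs(L1 - L0) / L0:.2e}")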

  17. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  18. Word-embeddings Italian semantic spaces: A semantic model for psycholinguistic research

    Directory of Open Access Journals (Sweden)

    Marelli Marco

    2017-01-01

    Distributional semantics has long been a source of successful models in psycholinguistics, making it possible to obtain semantic estimates for a large number of words in an automatic and fast way. However, resources in this respect remain scarce or hard to access for languages other than English. The present paper describes WEISS (Word-Embeddings Italian Semantic Space), a distributional semantic model based on Italian. WEISS includes models of semantic representations that are trained adopting state-of-the-art word-embeddings methods, applying neural networks to induce distributed representations for lexical meanings. The resource is evaluated against two test sets, demonstrating that WEISS obtains a better performance with respect to a baseline encoding word associations. Moreover, an extensive qualitative analysis of the WEISS output provides examples of the model's potential in capturing several semantic phenomena. Two variants of WEISS are released and made easily accessible via web through the SNAUT graphic interface.
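
    WEISS itself was trained on large Italian corpora with word-embedding methods; the record does not say which toolkit was used. The sketch below shows the same general idea with gensim's Word2Vec on a tiny invented corpus, so the similarities it prints are meaningless beyond illustrating the API.

      from gensim.models import Word2Vec

      # tiny toy corpus (repeated so the model has something to fit)
      sentences = [
          ["il", "cane", "abbaia", "nel", "giardino"],
          ["il", "gatto", "dorme", "nel", "giardino"],
          ["il", "cane", "gioca", "con", "il", "gatto"],
      ] * 50

      model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=20)
      print(model.wv.most_similar("cane", topn=3))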

  19. 75 FR 45606 - Interagency Ocean Policy Task Force-Final Recommendations of the Interagency Ocean Policy Task Force

    Science.gov (United States)

    2010-08-03

    .../oceans or by writing to The Council on Environmental Quality, Attn: Michael Weiss, 722 Jackson Place, NW., Washington, DC 20503. FOR FURTHER INFORMATION CONTACT: Michael Weiss, Deputy Associate Director for Ocean and...

  20. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM2 Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI, Accelerated Strategic Computing Initiatives rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM2 project. The DISCOM2 communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging Terabit Routers products, demonstrated the latest technologies for delivering visualization data to the scientific users, and demonstrated the latest in encryption methods including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  1. HYDRASTAR - a code for stochastic simulation of groundwater flow

    International Nuclear Information System (INIS)

    Norman, S.

    1992-05-01

    The computer code HYDRASTAR was developed as a tool for groundwater flow and transport simulations in the SKB 91 safety analysis project. Its conceptual ideas can be traced back to a report by Shlomo Neuman in 1988, see the reference section. The main idea of the code is the treatment of the rock as a stochastic continuum which separates it from the deterministic methods previously employed by SKB and also from the discrete fracture models. The current report is a comprehensive description of HYDRASTAR including such topics as regularization or upscaling of a hydraulic conductivity field, unconditional and conditional simulation of stochastic processes, numerical solvers for the hydrology and streamline equations and finally some proposals for future developments
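
    The stochastic-continuum idea amounts to treating log-conductivity as a correlated random field and drawing realisations from it. The sketch below generates a 1D unconditional realisation by Cholesky factorisation of an exponential covariance; the mean, variance and correlation length are invented, and this is not HYDRASTAR's algorithm.

      import numpy as np

      rng = np.random.default_rng(42)
      n, corr_len = 64, 8.0                         # grid cells, correlation length (cells)
      x = np.arange(n)
      cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
      L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))

      log_k = -7.0 + 1.5 * (L @ rng.standard_normal(n))   # toy mean and std of ln K
      k = np.exp(log_k)                             # one hydraulic-conductivity realisation
      print(f"K range: {k.min():.2e} .. {k.max():.2e} m/s")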

  2. Risk and Resilience Factors for Combat-Related Posttraumatic Psychopathology and Post Combat Adjustment

    Science.gov (United States)

    2011-06-01

    ... Age of Soldier Suicides: average age = 30.56 years ... vulnerability (i.e. gene x environment interactions). This will also allow for integrated research utilizing neuroimaging, psychophysiological, and ... and Non-Veterans. International Journal of Social Psychiatry 1980; 27(3): 204-212. 24. Kuh D, Ben-Shlomo Y, Lynch J, Hallqvist J, and Power C. Life ...

  3. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase effectiveness of new inhibitors development. Reliable predictions of the target protein inhibition by a small molecule, ligand, is defined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. Positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to new inhibitors development the accuracy of binding energy calculations should be higher than 1kcal/mol. Reasons of limited accuracy of modern docking programs are discussed. One of the most important aspects limiting this accuracy is imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, energies of the lowest 8192 minima are recalculated with CHARMM force field and PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of minima energies reveals the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and they are better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
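
    The quasi-docking comparison boils down to re-scoring the same set of minima with a second method and asking how well the two energy orderings agree. The sketch below does this on synthetic energies, so the agreement it reports says nothing about MMFF94, CHARMM or PM7 themselves.

      import numpy as np

      rng = np.random.default_rng(3)
      n_minima = 8192
      e_ref = np.sort(rng.normal(0.0, 5.0, n_minima))     # stand-in reference energies, sorted
      e_new = e_ref + rng.normal(0.0, 1.0, n_minima)      # same minima re-scored by another method

      # which minimum does the second method rank lowest? (0 = same global minimum)
      new_global_min = int(np.argmin(e_new))
      ranks_ref = np.argsort(np.argsort(e_ref))
      ranks_new = np.argsort(np.argsort(e_new))
      rank_corr = np.corrcoef(ranks_ref, ranks_new)[0, 1] # Spearman-style agreement
      print(new_global_min, f"rank correlation = {rank_corr:.3f}")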

  4. The role of mechanical boundary conditions in the soft mode dynamics of PbTiO3.

    Science.gov (United States)

    McCash, Kevin; Mani, B K; Chang, C-M; Ponomareva, I

    2014-10-29

    The role of different mechanical boundary conditions in the soft mode dynamics of ferroelectric PbTiO3 is systematically investigated using first-principles-based simulations and analytical model. The change in the soft mode dynamics due to hydrostatic pressure, uniaxial and biaxial stresses and biaxial strains is studied in a wide temperature range. Our computations predict: (i) the existence of Curie-Weiss laws that relate the soft mode frequency to the stress or strain; (ii) a non-trivial temperature evolution of the associated Curie-Weiss constants; (iii) a qualitative difference between the soft mode response to stresses/strains and hydrostatic pressure. The latter finding implies that the Curie-Weiss pressure law commonly used for residual stress estimation may not apply for the cases of uniaxial and biaxial stresses and strains. On the other hand, our systematic study offers a way to eliminate this difficulty through the establishment of Curie-Weiss stress and strain laws. Implications of our predictions for some available experimental data are discussed.
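
    A Curie-Weiss law of the kind invoked above is usually extracted by least-squares fitting. The sketch below fits chi = C / (T - theta) to synthetic, noisy susceptibility data; the constants are invented and the fit has nothing to do with the paper's stress and strain laws.

      import numpy as np
      from scipy.optimize import curve_fit

      def curie_weiss(T, C, theta):
          return C / (T - theta)

      rng = np.random.default_rng(7)
      T = np.linspace(800.0, 1100.0, 30)                       # temperatures above theta
      chi = curie_weiss(T, 1.5e5, 760.0) * (1 + 0.02 * rng.standard_normal(T.size))

      (C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(1e5, 700.0))
      print(f"C = {C_fit:.3g}, theta = {theta_fit:.1f} K")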

  5. Reliability and validity of the Chinese version of Weiss Functional Impairment Scale-Parent form for school age children

    Institute of Scientific and Technical Information of China (English)

    钱英; 杜巧新; 曲姗; 王玉凤

    2011-01-01

    Objective: To test the reliability and validity of the Chinese version of the Weiss Functional Impairment Scale-Parent form (WFIRS-P) in China. Methods: In total, 123 outpatients who met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria for attention deficit hyperactivity disorder (ADHD) and 240 normal children were recruited in this study. The parents of the subjects completed the WFIRS-P. At the same time, the parents of 39 outpatients completed the ADHD Rating Scale-IV (ADHD RS-IV) and the Behavior Rating Inventory of Executive Function (BRIEF), and the doctors who made the diagnoses for these 39 outpatients completed the Global Assessment of Functioning (GAF) to test criterion validity. One or two weeks later, the parents of 29 outpatients were asked to complete the WFIRS-P again to assess test-retest reliability. Results: Test-retest reliability coefficients were 0.61-0.87 and Cronbach's alpha coefficients were 0.70-0.92. WFIRS-P subscale scores were significantly correlated with scores on the ADHD RS-IV (r = 0.32-0.50, P < 0.05), the BRIEF (r = 0.23-0.71, P < 0.05) and the GAF (r = -0.29 to -0.59, P < 0.05). LISREL confirmatory factor analysis showed that the 5-subscale model of the BRIEF was reasonable (CFI = 0.97 for the control group, 0.89 for the ADHD group, RMSEA < 0.08). Compared with the control group, the ADHD group scored significantly higher on all WFIRS-P subscales (Ps < 0.01). Conclusion: The Chinese version of the Weiss Functional Impairment Scale-Parent form (WFIRS-P) has adequate reliability and validity.

  6. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputer potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge, their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full usage of the available hardware. Developing such a framework providing understandable code for domain scientists and being runtime efficient at the same time poses several challenges on developers of such a framework. For example, optimisations can be performed on individual operations or the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichsoever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using

  7. Magnetic susceptibility of the rare earth tungsten oxide bronzes of the defected perovskite-type structure (RxWO3)

    Energy Technology Data Exchange (ETDEWEB)

    Gesicki, A; Polaczek, A [Warsaw Univ. (Poland)

    1975-01-01

    Magnetic susceptibility of rare earth tungsten bronzes RxWO3 of cubic symmetry was measured in the 80-293 K range with the Gouy method. In disagreement with the data reported by other authors, it was found that the Curie-Weiss law with a negative Weiss parameter was fulfilled in each case. Possible coupling mechanisms are briefly discussed.

  8. 'It sounds like a riddle'

    DEFF Research Database (Denmark)

    Rasmussen, M.V.; Rasmussen, Mikkel Vedby

    2004-01-01

    A research programme on 'reflexive security' is emerging, as a number of students of international security are applying sociological insights of 'risk society' to understand new discourses and practices of security. This research note maps the current achievements and future challenges of this emerging research programme on risk, arguing that it offers a way to overcome the debate about whether to apply a 'broad' or 'narrow' concept of security; a debate which is stifling the discipline's ability to appreciate the 'war on terrorism' as an example of a new security practice. Discussing the nature of strategy in a risk environment, the paper outlines the consequences for applying the concept of reflexive rationality to strategy. Doing so, I address some of the concerns on how to study 'reflexive security' previously raised by Shlomo Griner in Millennium.

  9. Army Officer Job Analysis: Identifying Performance Requirements to Inform Officer Selection and Assignment

    Science.gov (United States)

    2011-08-01

    satisfaction), it is influenced by a variety of factors including affect, cognitions, and behaviors (Weiss, 2002; Weiss, Nicholas, & Daus, 1999). Research...more prosocial behaviors such as assisting coworkers. Job Involvement. Job involvement refers to the degree to which one psychologically...variations in affective experiences over time. Organizational Behavior and Human Decision Processes, 78, 1-24. doi: 10.1006/obhd.1999.2824 Wong, L

  10. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  11. Grain Boundary Engineering of Lithium-Ion-Conducting Lithium Lanthanum Titanate for Lithium-Air Batteries

    Science.gov (United States)

    2016-01-01

    Grain Boundary Engineering of Lithium-Ion-Conducting Lithium Lanthanum Titanate for Lithium-Air Batteries, by Victoria L Blair and Claire V Weiss Brennan (Weapons and Materials Research Directorate, ARL) and Joseph M Marsico (Rochester ...). Approved for public release ...

  12. Assessment of Assembling Objects (AO) for Improving Predictive Performance of the Armed Forces Qualification Test

    Science.gov (United States)

    2011-04-01

    Willis, 2000; Malinowski, 2001; Weiss, Kemmler, Deisenhammer, Fleischhacker, & Delazer, 2003; Wise, Welsh, Grafton, Foley, Earles, Sawin, & Divgi, 1992... Malinowski, 2001; Weiss et al., 2003; Wise et al., 1992). For example, Held, Alderton, Foley, and Segall (1993) found that men scored higher than women on...cognitive abilities across the adult life span. Aging, Neuropsychology, and Cognition, 7(1), 32-53. Malinowski, J. (2001). Mental rotation and

  13. 195Pt and 119Sn Knight shifts of U3Pt3Sn4

    International Nuclear Information System (INIS)

    Kojima, K.; Takabatake, T.; Harada, A.; Hihara, T.

    1995-01-01

    The 195Pt and 119Sn Knight shifts in U3Pt3Sn4 have been measured in the temperature range 4.2-298 K. They exhibit Curie-Weiss like behaviors above about 50 K and remain constant below about 10 K. This suggests that the deviation of χ(T) from the modified Curie-Weiss law is an intrinsic property of U3Pt3Sn4. ((orig.))

  14. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST. Program Files doi:http://dx.doi.org/10.17632/w7rgdrhb85.1 Licensing provisions: BSD 3-clause Programming language: C, C++ External routines/libraries: For compiling: SCons, MPI (optional) Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl. Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016 Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version. Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton. Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and

  15. Understanding the Causes of Civil Wars in Post-Colonial Sub-Saharan Africa. Case study: Sierra Leone and the Role of women in the Search for Peace

    OpenAIRE

    Sesay, Adama

    2013-01-01

    It is widely understood or assumed among scholars like Thomas Weiss that civil wars in Africa are mainly wars for natural resources. This statement needs careful evaluation, and it is for this reason that this study will use Weiss's theories on the causes of wars in sub-Saharan Africa as a background for understanding the Sierra Leone conflict. In addition, as the title implies, this paper further aims to investigate the war in Sierra Leone and most...

  16. Magnetic properties of the alkali metal ozonides KO3, RbO3, and CsO3

    International Nuclear Information System (INIS)

    Lueken, H.; Deussen, M.; Jansen, M.; Hesse, W.; Schnick, W.

    1987-01-01

    The magnetic susceptibilities of KO3, RbO3 and CsO3 have been determined between 3.6 and 250 K. Above 50 K Curie-Weiss behaviour is observed. Magnetic moments of 1.74 μB (KO3, CsO3) and 1.80 μB (RbO3) calculated from the Curie-Weiss straight lines correspond with spin-only moments expected for isolated O3- species with one unpaired electron. The Weiss constants Θ are -34 K (KO3), -23 K (RbO3) and -10 K (CsO3). The low temperature behaviour of KO3 and RbO3 (broad maxima in susceptibility at 20 and 17 K, respectively, and minima at 6 K) is typical of systems which show with decreasing temperature low-dimensional antiferromagnetic and three-dimensional magnetic ordering. Inspecting the intermolecular distances between oxygen atoms the pathways of exchange interactions are discussed. (author)

  17. Cavernous hemangioma of the knee - case report

    OpenAIRE

    Weiss, Marcin; Dolata, Tomasz; Weiss, Waldemar; Maksymiak, Martyna; Kałużny, Krystian; Kałużna, Anna; Zukow, Walery; Hagner Derengowska, Magdalena

    2018-01-01

    Weiss Marcin, Dolata Tomasz, Weiss Waldemar, Maksymiak Martyna, Kałużny Krystian, Kałużna Anna, Zukow Walery, Hagner‑Derengowska Magdalena. Cavernous hemangioma of the knee - case report. Journal of Education, Health and Sport. 2018;8(4):318-325. eISNN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.1226645 http://ojs.ukw.edu.pl/index.php/johs/article/view/5438 The journal has had 7 points in Ministry of Science and Higher Education parametric evalu...

  18. Treatment of intraarticular displaced fractures of the calcaneus bone using nail blocked calcanail

    OpenAIRE

    Weiss, Marcin; Dolata, Tomasz; Weiss, Waldemar; Maksymiak, Martyna; Kałużny, Krystian; Kałużna, Anna; Zukow, Walery; Hagner‑Derengowska, Magdalena

    2018-01-01

    Weiss Marcin, Dolata Tomasz, Weiss Waldemar, Maksymiak Martyna, Kałużny Krystian, Kałużna Anna, Zukow Walery, Hagner‑Derengowska Magdalena. Treatment of intraarticular displaced fractures of the calcaneus bone using nail blocked calcanail. Journal of Education, Health and Sport. 2018;8(4):338-345. eISNN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.1226782 http://ojs.ukw.edu.pl/index.php/johs/article/view/5439 https://pbn.nauka.gov.pl/sedno-webapp/works/863609 ...

  19. White dwarfs - black holes. Weisse Zwerge - schwarze Loecher

    Energy Technology Data Exchange (ETDEWEB)

    Sexl, R; Sexl, H

    1975-04-01

    The physical arguments and problems of relativistic astrophysics are presented in a correct way, but without any higher mathematics. The book is addressed to teachers, experimental physicists, and others with a basic knowledge covering an introductory lecture in physics. The issues dealt with are: fundamentals of general relativity, classical tests of general relativity, curved space-time, stars and planets, pulsars, gravitational collapse and black holes, the search for black holes, gravitational waves, cosmology, cosmogony, and the early universe.

  20. The Tallinntellect / Toomas Hendrik Ilves ; intervjueerinud Michael Weiss

    Index Scriptorium Estoniae

    Ilves, Toomas Hendrik, 1953-

    2013-01-01

    On Russian-US political relations, Estonia's austerity policy, the USA's conduct toward totalitarian regimes, the president's favourite writer Vladimir Nabokov, and the former membership of today's politicians in the Communist Party

  1. Three-dimensional kinetic simulations of whistler turbulence in solar wind on parallel supercomputers

    Science.gov (United States)

    Chang, Ouliang

    The objective of this dissertation is to study the physics of whistler turbulence evolution and its role in energy transport and dissipation in the solar wind plasmas through computational and theoretical investigations. This dissertation presents the first fully three-dimensional (3D) particle-in-cell (PIC) simulations of whistler turbulence forward cascade in a homogeneous, collisionless plasma with a uniform background magnetic field B o, and the first 3D PIC simulation of whistler turbulence with both forward and inverse cascades. Such computationally demanding research is made possible through the use of massively parallel, high performance electromagnetic PIC simulations on state-of-the-art supercomputers. Simulations are carried out to study characteristic properties of whistler turbulence under variable solar wind fluctuation amplitude (epsilon e) and electron beta (betae), relative contributions to energy dissipation and electron heating in whistler turbulence from the quasilinear scenario and the intermittency scenario, and whistler turbulence preferential cascading direction and wavevector anisotropy. The 3D simulations of whistler turbulence exhibit a forward cascade of fluctuations into broadband, anisotropic, turbulent spectrum at shorter wavelengths with wavevectors preferentially quasi-perpendicular to B o. The overall electron heating yields T ∥ > T⊥ for all epsilone and betae values, indicating the primary linear wave-particle interaction is Landau damping. But linear wave-particle interactions play a minor role in shaping the wavevector spectrum, whereas nonlinear wave-wave interactions are overall stronger and faster processes, and ultimately determine the wavevector anisotropy. Simulated magnetic energy spectra as function of wavenumber show a spectral break to steeper slopes, which scales as k⊥lambda e ≃ 1 independent of betae values, where lambdae is electron inertial length, qualitatively similar to solar wind observations. Specific
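
    The spectral-break scaling quoted above, k-perp times the electron inertial length of order one, is easy to put numbers on: lambda_e = c / omega_pe. The density below is only a typical solar-wind value, not a parameter taken from the simulations.

      import numpy as np

      c = 2.998e8                         # m/s
      eps0, e, m_e = 8.854e-12, 1.602e-19, 9.109e-31
      n_e = 10.0e6                        # electrons per m^3 (10 cm^-3, typical solar wind)

      omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))   # electron plasma frequency (rad/s)
      lambda_e = c / omega_pe                          # electron inertial length
      print(f"lambda_e ~ {lambda_e / 1e3:.2f} km, break near k_perp ~ {1.0 / lambda_e:.2e} 1/m")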

  2. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1986-01-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  3. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries-such as Los Alamos, CERN, Rutherford laboratory-but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  4. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1985-12-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  5. Magnetic and Moessbauer-spectroscopic studies of iron-clusters in zeolites. [Reduction of ferrous ions

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, F; Gunsser, W; Knappwost, A [Hamburg Univ. (F.R. Germany). Inst. fuer Physikalische Chemie

    1975-12-01

    Iron clusters have been prepared within zeolite holes by reduction of zeolites containing ferrous ions. The diameter of these particles must therefore be smaller than 13 Å. They are superparamagnetic and their Moessbauer spectra show no HFS, even at 4 K. The temperature dependence of the magnetic susceptibility of the unreduced zeolites obeys a Curie-Weiss law with p_eff = 4.45 μB and Θ = 105 K. The Weiss curves of the reduced samples lie distinctly below those of the bulk material.
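
    As an illustration of the Curie-Weiss analysis used above, the following minimal sketch fits χ(T) = C/(T − Θ) by linearizing 1/χ versus T and converts the Curie constant to an effective moment via p_eff ≈ √(8C), which assumes molar CGS susceptibility units; the data points are hypothetical, chosen only to be roughly consistent with the reported p_eff and Θ, and are not the measured zeolite values.

      import numpy as np

      # Hypothetical susceptibility data (emu/mol) versus temperature (K).
      T = np.array([150.0, 200.0, 250.0, 300.0])
      chi = np.array([0.0551, 0.0261, 0.0171, 0.0127])

      # Curie-Weiss law: chi = C/(T - Theta)  =>  1/chi = T/C - Theta/C (linear in T).
      slope, intercept = np.polyfit(T, 1.0 / chi, 1)
      C = 1.0 / slope           # Curie constant (emu K/mol)
      Theta = -intercept * C    # Weiss temperature (K)

      # For molar CGS susceptibility, p_eff is roughly sqrt(8*C) in Bohr magnetons.
      p_eff = np.sqrt(8.0 * C)
      print(f"C = {C:.2f} emu K/mol, Theta = {Theta:.0f} K, p_eff = {p_eff:.2f} mu_B")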

  6. The Structure of Affine Buildings

    CERN Document Server

    Weiss, Richard M

    2009-01-01

    In The Structure of Affine Buildings, Richard Weiss gives a detailed presentation of the complete proof of the classification of Bruhat-Tits buildings first completed by Jacques Tits in 1986. The book includes numerous results about automorphisms, completions, and residues of these buildings. It also includes tables correlating the results in the locally finite case with the results of Tits's classification of absolutely simple algebraic groups defined over a local field. A companion to Weiss's The Structure of Spherical Buildings, The Structure of Affine Buildings is organized around the clas

  7. Leveraging HPC resources for High Energy Physics

    International Nuclear Information System (INIS)

    O'Brien, B; Washbrook, A; Walker, R

    2014-01-01

    High Performance Computing (HPC) supercomputers provide unprecedented computing power for a diverse range of scientific applications. The most powerful supercomputers now deliver petaflop peak performance with the expectation of 'exascale' technologies available in the next five years. More recent HPC facilities use x86-based architectures managed by Linux-based operating systems which could potentially allow unmodified HEP software to be run on supercomputers. There is now a renewed interest from both the LHC experiments and the HPC community to accommodate data analysis and event simulation production on HPC facilities. This study provides an outline of the challenges faced when incorporating HPC resources for HEP software by using the HECToR supercomputer as a demonstrator.

  8. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It is therefore an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  9. A high level language for a high performance computer

    Science.gov (United States)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. To date, the languages used to program these supercomputers have been modifications of programming languages designed many years ago for sequential machines. A new programming language should be developed, based on techniques that have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  10. Dal CERN, flusso di dati a una media di 600 megabytes al secondo per dieci giorni consecutivi

    CERN Multimedia

    2005-01-01

    The supercomputing Grid successfully took up its first technological challenge. Eight supercomputing centres sustained a continuous flow of data from CERN in Geneva over the Internet and directed it to seven centres in Europe and the United States

  11. Study on the climate system and mass transport by a climate model

    International Nuclear Information System (INIS)

    Numaguti, A.; Sugata, S.; Takahashi, M.; Nakajima, T.; Sumi, A.

    1997-01-01

    The Center for Global Environmental Research (CGER), an organ of the National Institute for Environmental Studies of the Environment Agency of Japan, was established in October 1990 to contribute broadly to the scientific understanding of global change and to the elucidation of, and solutions for, our pressing environmental problems. CGER conducts environmental research from interdisciplinary, multiagency, and international perspectives, provides research support facilities such as a supercomputer and databases, and offers its own data from long-term monitoring of the global environment. In March 1992, CGER installed a supercomputer system (NEC SX-3, Model 14) to facilitate research on global change. The system is open to environmental researchers worldwide. Proposed research programs are evaluated by the Supercomputer Steering Committee, which consists of leading scientists in climate modeling, atmospheric chemistry, oceanic circulation, and computer science. After project approval, authorization for system usage is provided. In 1995 and 1996, several research proposals were designated as priority research and allocated larger shares of computer resources. The CGER Supercomputer Monograph Report Vol. 3 reports on priority research carried out on CGER's supercomputer. It covers a description of the CCSR-NIES atmospheric general circulation model, Lagrangian general circulation based on the time scale of particle motion, and the performance of the CCSR-NIES atmospheric general circulation model in the stratosphere. The results obtained from these three studies are described in three chapters. We hope this report provides useful information on the global environmental research conducted on our supercomputer

  12. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results
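
    A back-of-the-envelope check of the quoted numbers can be made as follows; the per-CPU peak of 2.8 GFlop/s is an assumption about BlueGene/L hardware and is not stated in the record.

      # Relating CPU count, fraction of peak, and sustained TFlop/s for the quoted LQCD run.
      n_cpus = 131072                # CPUs in the largest BlueGene/L run (from the record)
      peak_per_cpu_gflops = 2.8      # assumed per-CPU peak for BlueGene/L (not from the record)
      fraction_of_peak = 0.20        # "about 20% of peak" (from the record)

      peak_tflops = n_cpus * peak_per_cpu_gflops / 1000.0
      sustained_tflops = fraction_of_peak * peak_tflops
      print(f"peak ~ {peak_tflops:.0f} TFlop/s, sustained ~ {sustained_tflops:.0f} TFlop/s")
      # Roughly 367 TFlop/s peak and ~73 TFlop/s sustained, consistent with the quoted 70.5 TFlop/s.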

  13. Lagrangian statistics and flow topology in forced two-dimensional turbulence.

    Science.gov (United States)

    Kadoch, B; Del-Castillo-Negrete, D; Bos, W J T; Schneider, K

    2011-03-01

    A study of the relationship between Lagrangian statistics and flow topology in fluid turbulence is presented. The topology is characterized using the Weiss criterion, which provides a conceptually simple tool to partition the flow into topologically different regions: elliptic (vortex dominated), hyperbolic (deformation dominated), and intermediate (turbulent background). The flow corresponds to forced two-dimensional Navier-Stokes turbulence in doubly periodic and circular bounded domains, the latter with no-slip boundary conditions. In the double periodic domain, the probability density function (pdf) of the Weiss field exhibits a negative skewness consistent with the fact that in periodic domains the flow is dominated by coherent vortex structures. On the other hand, in the circular domain, the elliptic and hyperbolic regions seem to be statistically similar. We follow a Lagrangian approach and obtain the statistics by tracking large ensembles of passively advected tracers. The pdfs of residence time in the topologically different regions are computed introducing the Lagrangian Weiss field, i.e., the Weiss field computed along the particles' trajectories. In elliptic and hyperbolic regions, the pdfs of the residence time have self-similar algebraic decaying tails. In contrast, in the intermediate regions the pdf has exponential decaying tails. The conditional pdfs (with respect to the flow topology) of the Lagrangian velocity exhibit Gaussian-like behavior in the periodic and in the bounded domains. In contrast to the freely decaying turbulence case, the conditional pdfs of the Lagrangian acceleration in forced turbulence show a comparable level of intermittency in both the periodic and the bounded domains. The conditional pdfs of the Lagrangian curvature are characterized, in all cases, by self-similar power-law behavior with a decay exponent of order -2.
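
    The Weiss criterion mentioned above is commonly written Q = s_n² + s_s² − ω², with strain-dominated (hyperbolic) regions where Q > 0 and vortex-dominated (elliptic) regions where Q < 0. The sketch below computes this field for a hypothetical single-vortex flow on a periodic grid; the velocity field, grid, and thresholds are illustrative only, and sign conventions differ between papers.

      import numpy as np

      def okubo_weiss(u, v, dx, dy):
          """Okubo-Weiss field Q = s_n^2 + s_s^2 - omega^2 for a 2D velocity field.

          Q > 0: strain-dominated (hyperbolic); Q < 0: vortex-dominated (elliptic);
          |Q| small: turbulent background. Finite differences via np.gradient.
          """
          dudx, dudy = np.gradient(u, dx, dy, axis=(0, 1))
          dvdx, dvdy = np.gradient(v, dx, dy, axis=(0, 1))
          s_n = dudx - dvdy      # normal strain
          s_s = dvdx + dudy      # shear strain
          omega = dvdx - dudy    # vorticity
          return s_n**2 + s_s**2 - omega**2

      # Hypothetical example: a single Gaussian vortex on a doubly periodic grid.
      n, L = 256, 2 * np.pi
      x = np.linspace(0, L, n, endpoint=False)
      X, Y = np.meshgrid(x, x, indexing="ij")
      psi = np.exp(-((X - L / 2) ** 2 + (Y - L / 2) ** 2))   # streamfunction
      u = np.gradient(psi, x, axis=1)                         # u = d(psi)/dy
      v = -np.gradient(psi, x, axis=0)                        # v = -d(psi)/dx
      Q = okubo_weiss(u, v, x[1] - x[0], x[1] - x[0])
      print("vortex core is elliptic (Q < 0):", Q[n // 2, n // 2] < 0)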

  14. Performance Analysis of FEM Algorithmson GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh; Kortas, Samuel

    2015-01-01

    -only Exascale systems will be unsustainable, thus accelerators such as graphic processing units (GPUs) and many-integrated-core (MIC) will likely be the integral part of the TOP500 (http://www.top500.org/) supercomputers, beyond 2020. The emerging supercomputer

  15. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  16. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when CPUs and accelerators must collaborate to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we let the CPU and GPU collaborate for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations
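
    As a toy illustration of the kind of CPU/GPU load balance discussed above, the sketch below splits the cells of a grid block between the two devices in proportion to their measured throughputs so that both finish a step at roughly the same time; the function and throughput numbers are hypothetical and are not taken from HOSTA, whose actual scheme also accounts for the GPU's smaller memory.

      def split_cells(n_cells, gpu_cells_per_sec, cpu_cells_per_sec):
          """Split grid cells between GPU and CPU in proportion to throughput."""
          gpu_share = gpu_cells_per_sec / (gpu_cells_per_sec + cpu_cells_per_sec)
          n_gpu = int(round(n_cells * gpu_share))
          return n_gpu, n_cells - n_gpu

      # Hypothetical throughputs, echoing the ~1.3x GPU-only speedup quoted above.
      n_gpu, n_cpu = split_cells(1000000, gpu_cells_per_sec=1.3e6, cpu_cells_per_sec=1.0e6)
      print(f"GPU gets {n_gpu} cells, CPU gets {n_cpu} cells")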

  17. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when CPUs and accelerators must collaborate to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we let the CPU and GPU collaborate for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  18. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  19. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating- point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  20. Supercomputer debugging workshop `92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  1. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions

  2. Easy Access to HPC Resources through the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-11-01

    The computing environment at the King Abdullah University of Science and Technology (KAUST) is growing in size and complexity. KAUST hosts the tenth fastest supercomputer in the world (Shaheen II) and several HPC clusters. Researchers can be inhibited by the complexity, as they need to learn new languages and execute many tasks in order to access the HPC clusters and the supercomputer. In order to simplify the access, we have developed an interface between the applications and the clusters and supercomputer that automates the transfer of input data and job submission and also the retrieval of results to the researcher’s local workstation. The innovation is that the user now submits his jobs from within the application GUI on his workstation, and does not have to directly log into the clusters or supercomputer anymore. This article details the solution and its benefits to the researchers.
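
    A minimal sketch of the kind of automation such an interface performs (stage input, submit the job, retrieve results) is given below using plain ssh/scp and an sbatch-style submission; the hostname, paths, scheduler, and file names are assumptions for illustration and are not details of the KAUST setup.

      import subprocess

      HOST = "user@hpc.example.org"        # hypothetical login node
      REMOTE_DIR = "/scratch/user/case01"  # hypothetical working directory

      def run(cmd):
          """Run a local command and raise if it fails."""
          subprocess.run(cmd, check=True)

      def submit_case(local_input, job_script):
          """Stage input data and submit the job, so the user never logs in directly."""
          run(["ssh", HOST, f"mkdir -p {REMOTE_DIR}"])
          run(["scp", local_input, job_script, f"{HOST}:{REMOTE_DIR}/"])
          run(["ssh", HOST, f"cd {REMOTE_DIR} && sbatch {job_script}"])

      def fetch_results(pattern="results*"):
          """Copy result files back to the local workstation."""
          run(["scp", f"{HOST}:{REMOTE_DIR}/{pattern}", "."])

      # Example usage (hypothetical file names):
      # submit_case("input.dat", "job.slurm")
      # fetch_results()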

  3. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  4. A cegueira histérica de Adolf Hitler: histórico de um boletim médico The hysterical blindness of Adolf Hitler: record of a medical chart

    Directory of Open Access Journals (Sweden)

    Gerhard Köpf

    2006-01-01

    In 1918, in a military reserve hospital located in the small Pomeranian town of Pasewalk, the neuropsychiatrist Prof. Edmund Forster treated Adolf Hitler, an Austrian corporal suffering from a war neurosis (hysterical blindness), using suggestive techniques. Soon after Hitler's ascension to power in Nazi Germany in 1933, Dr. Forster met with a group of exiled writers living in Paris and secretly gave them the information about the case. The writer Ernst Weiss, who was also a physician, later used this information to write his novel The Eye Witness (Der Augenzeuge), which would be published only in 1963. Prof. Forster committed suicide in 1933, in strange circumstances, after a campaign of defamatory statements against him. Weiss also committed suicide in 1940, when German troops invaded Paris. The Gestapo also murdered several other persons involved in Hitler's medical chart.

  5. Building the Teraflops/Petabytes Production Computing Center

    International Nuclear Information System (INIS)

    Kramer, William T.C.; Lucas, Don; Simon, Horst D.

    1999-01-01

    In just one decade, the 1990s, supercomputer centers have undergone two fundamental transitions which require rethinking their operation and their role in high performance computing. The first transition in the early to mid-1990s resulted from a technology change in high performance computing architecture. Highly parallel distributed memory machines built from commodity parts increased the operational complexity of the supercomputer center, and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers are introducing loosely coupled clusters of SMPs as their premier high performance computing platforms, while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular, computational grid applications. In this paper we describe what steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.

  6. Exascale Data Analysis

    CERN Multimedia

    CERN. Geneva; Fitch, Blake

    2011-01-01

    Traditionally, the primary role of supercomputers was to create data, primarily for simulation applications. Due to usage and technology trends, supercomputers are increasingly also used for data analysis. Some of this data is from simulations, but there is also a rapidly increasing amount of real-world science and business data to be analyzed. We briefly overview Blue Gene and other current supercomputer architectures. We outline future architectures, up to the Exascale supercomputers expected in the 2020 time frame. We focus on the data analysis challenges and opportunities, especially those concerning Flash and other up-and-coming storage-class memory. About the speakers: Blake G. Fitch has been with IBM Research, Yorktown Heights, NY since 1987, mainly pursuing interests in parallel systems. He joined the Scalable Parallel Systems Group in 1990, contributing to research and development that culminated in the IBM scalable parallel system (SP*) product. His research interests have focused on applicatio...

  7. Delocalized and localized states of eg electrons in half-doped manganites.

    Science.gov (United States)

    Winkler, E L; Tovar, M; Causa, M T

    2013-07-24

    We have studied the magnetic behaviour of the half-doped manganite Y0.5Ca0.5MnO3 over an extended range of temperatures by means of magnetic susceptibility, χ(T), and electron spin resonance (ESR) experiments. At high temperature the system crystallizes in an orthorhombic structure. The resistivity value, ρ ≃ 0.05 Ω cm at 500 K, indicates metallic behaviour, while the Curie-Weiss dependence of χ(T) and the thermal evolution of the ESR parameters are very well described by a model that considers a system composed of localized Mn(4+) cores, [Formula: see text], and itinerant eg electrons. The strong coupling between t2g and eg electrons results in an enhanced Curie constant and an FM Curie-Weiss temperature that overcomes the AFM interactions between the [Formula: see text] cores. A transition to a more distorted phase is observed at T ≈ 500 K, and signatures of localization of the eg electrons appear in the χ(T) behaviour below 300 K. A new Curie-Weiss regime is observed, where the Curie-constant value is consistent with dimer formation. Based on mean-field calculations, the dimer formation is predicted as a function of the interaction strength between the t2g and eg electrons.

  8. Lattice gauge theory using parallel processors

    International Nuclear Information System (INIS)

    Lee, T.D.; Chou, K.C.; Zichichi, A.

    1987-01-01

    The book's contents include: Lattice Gauge Theory Lectures: Introduction and Current Fermion Simulations; Monte Carlo Algorithms for Lattice Gauge Theory; Specialized Computers for Lattice Gauge Theory; Lattice Gauge Theory at Finite Temperature: A Monte Carlo Study; Computational Method - An Elementary Introduction to the Langevin Equation, Present Status of Numerical Quantum Chromodynamics; Random Lattice Field Theory; The GF11 Processor and Compiler; and The APE Computer and First Physics Results; Columbia Supercomputer Project: Parallel Supercomputer for Lattice QCD; Statistical and Systematic Errors in Numerical Simulations; Monte Carlo Simulation for LGT and Programming Techniques on the Columbia Supercomputer; Food for Thought: Five Lectures on Lattice Gauge Theory

  9. Schalm's veterinary hematology

    National Research Council Canada - National Science Library

    Weiss, Douglas J; Wardrop, K. Jane; Schalm, O. W

    2010-01-01

    Table-of-contents excerpt: p. 69, Douglas J. Weiss; chapter "Design and Methods Used for Preclinical Hematotoxicity Studies", p. 71, William J. Reagan, Florence M. Poitout-Belissent...

  10. MVP utilization for PWR design code

    International Nuclear Information System (INIS)

    Matsumoto, Hideki; Tahara, Yoshihisa

    2001-01-01

    MHI is studying a method for spatially dependent resonance cross sections in order to predict the power distribution within a fuel pellet accurately. For this purpose, the multiband method and the Stoker/Weiss method were implemented in the two-dimensional transport code PHOENIX-P, and the methods were validated by comparison with the MVP code. Although an appropriate reference for the resonance cross-section study could not be obtained from deterministic codes, results from the Monte Carlo code MVP are now available and useful as a reference. It is shown here how MVP is used to develop the multiband method and the Stoker/Weiss method, and how effective the MVP results are for the study of resonance cross sections. (author)

  11. Advances in petascale kinetic plasma simulation with VPIC and Roadrunner

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, Kevin J [Los Alamos National Laboratory; Albright, Brian J [Los Alamos National Laboratory; Yin, Lin [Los Alamos National Laboratory; Daughton, William S [Los Alamos National Laboratory; Roytershteyn, Vadim [Los Alamos National Laboratory; Kwan, Thomas J T [Los Alamos National Laboratory

    2009-01-01

    VPIC, a first-principles 3D electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. They give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short pulse laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.

  12. Southern African Business Review - Vol 15, No 1 (2011)

    African Journals Online (AJOL)

    Supply chain management problems at South African automotive component manufacturers. MJ Naude, JA Badenhorst-Weiss ...

  13. Lawrence Livermore National Laboratory selects Intel Itanium 2 processors for world's most powerful Linux cluster

    CERN Multimedia

    2003-01-01

    "Intel Corporation, system manufacturer California Digital and the University of California at Lawrence Livermore National Laboratory (LLNL) today announced they are building one of the world's most powerful supercomputers. The supercomputer project, codenamed "Thunder," uses nearly 4,000 Intel® Itanium® 2 processors... is expected to be complete in January 2004" (1 page).

  14. Feodaalne verepulm teleekraanil / Aare Ermel

    Index Scriptorium Estoniae

    Ermel, Aare, 1957-2013

    2011-01-01

    On the US channel HBO's television series "Game of Thrones" ("Troonide mäng"), based on George R. R. Martin's book, which begins airing on 8 September on Fox Life's Estonian-language channel (series creators David Benioff and Dan Weiss)

  15. Parallel adaptation of a vectorised quantumchemical program system

    International Nuclear Information System (INIS)

    Van Corler, L.C.H.; Van Lenthe, J.H.

    1987-01-01

    Supercomputers, like the CRAY 1 or the Cyber 205, have had, and still have, a marked influence on quantum chemistry. Vectorization has led to a considerable increase in the performance of quantum chemistry programs. However, clock-cycle times more than a factor of 10 smaller than those of present supercomputers are not to be expected. Therefore future supercomputers will have to depend on parallel structures. Recently, the first examples of such supercomputers have been installed. To be prepared for this new generation of (parallel) supercomputers, one should consider the concepts one wants to use and the kinds of problems one will encounter during implementation of existing vectorized programs on those parallel systems. The authors implemented four important parts of a large quantum-chemical program system (ATMOL), i.e. integrals, SCF, 4-index and Direct-CI, in the parallel environment at ECSEC (Rome, Italy). This system offers simulated parallelism on the host computer (IBM 4381) and real parallelism on at most 10 attached processors (FPS-164). Quantum-chemical programs usually handle large amounts of data and very large, often sparse matrices. The transfer of so much data can cause problems concerning communication and overhead, in view of which shared memory and shared disks must be considered. The strategy and the tools that were used to parallelise the programs are shown. Also, some examples are presented to illustrate the effectiveness and performance of the system in Rome for these types of calculations

  16. JMBR VOLUME 15 Number 2 Decemebr 2016 - correction.cdr

    African Journals Online (AJOL)

    Fine Print

    Mallory-Weiss tears, acute stress gastritis, Dieulafoy lesion, non-steroidal anti-inflammatory drugs ... age (> 60 years), in both males and females2. ... anxious, pale, dehydrated, afebrile, anicteric ... frank blood were evacuated on digital.

  17. Cellular computing

    National Research Council Canada - National Science Library

    Amos, Martyn

    2004-01-01

    Table-of-contents excerpt: p. 120, Ron Weiss, Thomas F. Knight Jr., and Gerald Sussman; Chapter 8, "The Biology of Integration of Cells into Microscale and Nanoscale Systems", p. 148, Michael L. Simpson, Timothy E. McKnight, Michael A. Guillor...

  18. Approach to upper gastrointestinal bleeding

    African Journals Online (AJOL)

    Benign ulcer. Mallory-Weiss tear .... pressure and direct thermal coagulation. Alternatively, use ... Forrest classification of peptic ulcer bleeding related to risks of rebleeding. (NBVV - non- .... esomeprazole for prevention of recurrent peptic ulcer ...

  19. Dissolved helium and TDS in groundwater from Bhavnagar in Gujarat

    Indian Academy of Sciences (India)


    2003-01-02

    Jan 2, 2003 ... by enhanced pumping of old groundwater with relatively higher concentration of dissolved helium and salt .... solubility changes due to these (Weiss 1971) can- ... aquifers and relatively low helium concentra- .... permeability.

  20. Trinta dias na casa de tiros: o estranho caso do Dr. Edmund Forster e Adolf Hitler Thirty days at the shooting house: the strange case of Dr. Edmund Forster and Adolf Hitler

    Directory of Open Access Journals (Sweden)

    David Lewis

    2006-01-01

    In 1918, in a military reserve hospital located in the small Pomeranian town of Pasewalk, the neuropsychiatrist Prof. Edmund Forster treated an Austrian corporal called Adolf Hitler, who was suffering from a war neurosis (hysterical blindness), by means of suggestive techniques. Soon after Hitler's ascension to power in Nazi Germany in 1933, Dr. Forster met with a group of exiled writers living in Paris and secretly gave them information about the case. The writer Ernst Weiss, himself a physician, later used this information to produce his novel The Eye Witness (Der Augenzeuge), which would be published only in 1963. In strange circumstances, Prof. Forster committed suicide in 1933 after a systematic defamatory campaign orchestrated by the Nazis. Weiss also committed suicide in 1940, when German troops invaded Paris. The Gestapo also murdered several other persons involved in Hitler's medical chart.

  1. Evaluation of existing and proposed computer architectures for future ground-based systems

    Science.gov (United States)

    Schulbach, C.

    1985-01-01

    Parallel processing architectures and techniques used in current supercomputers are described, and projections are made of future advances. Presently, the von Neumann sequential processing pattern has been accelerated by having separate I/O processors, interleaved memories, wide memories, independent functional units and pipelining. Recent supercomputers have featured single-instruction, multiple-data-stream architectures, which have different processors for performing various operations (vector or pipeline processors). Multiple-instruction, multiple-data-stream machines have also been developed. Data flow techniques, wherein program instructions are activated only when data are available, are expected to play a large role in future supercomputers, along with increased parallel processor arrays. The enhanced operational speeds are essential for adequately treating data from future spacecraft remote sensing instruments such as the Thematic Mapper.

  2. Inversion of Flow Depth and Speed from Tsunami Deposits using TsuSedMod

    Science.gov (United States)

    Spiske, M.; Weiss, R.; Roskosch, J.; Bahlburg, H.

    2008-12-01

    The global evolution of a tsunami wave train can be expressed as the sum of local effects along a tsunami-wave beam. The near-shore evolution of a tsunami is very complex, as the waves interact with the sea-bottom sediments. Filtered through offshore and onshore erosion and deposition, this evolution is recorded in the coastal area by topographical changes, local erosion and tsunami deposits. Recordable sedimentary on-site features include grain-size distributions and horizontal thickness trends. Immediately after an event, indicators of flow depth and run-up extent, such as water marks on buildings and vegetation, debris and plastic bags caught in trees, and swash lines, can be measured in the field. A direct measurement of the overland flow velocity is usually not possible. However, for recent tsunami events, surveillance-camera videos or witness accounts have helped to estimate the characteristics of overland flow. For historical and paleotsunami events such information is not directly available. Jaffe & Gelfenbaum (2007) developed an inversion model (TsuSedMod) to estimate flow depth and speed based upon the grain-size distribution and the thickness of onshore tsunami sediments. This model assumes a steady distribution of sediment in the water column, for which the application of the Rouse equation is possible. Further simplifications, especially concerning the turbulence structure, are based on Prandtl's mixing-length theory, the standard approximation in physical sedimentology. We calculated flow depths for sediments left behind by the 2004 Sumatra tsunami in India and Kenya (Weiss & Bahlburg, 2006; Bahlburg & Weiss, 2007) and by the 2006 Java tsunami on Java (Piepenbreier et al., 2007), using the model of Jaffe and Gelfenbaum (2007). Estimated flow depths were compared with measured data to extend the validation procedure. This extension is needed to gain confidence and understanding before the next step is taken to compute the near
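
    The Rouse equation referred to above gives the steady suspended-sediment concentration profile that forward and inverse models of this kind build on; a minimal sketch is given below. The parameter values are hypothetical, and TsuSedMod itself adds a mixing-length turbulence closure and grain-size-dependent settling that are not shown here.

      import numpy as np

      def rouse_profile(z, h, a, C_a, w_s, u_star, kappa=0.40):
          """Standard Rouse suspended-sediment concentration profile.

          C(z) = C_a * [((h - z)/z) * (a/(h - a))]**P, with P = w_s/(kappa*u_star).
          z: heights above the bed (m), h: flow depth (m), a: reference level (m),
          C_a: reference concentration at z = a, w_s: settling velocity (m/s),
          u_star: shear velocity (m/s), kappa: von Karman constant.
          """
          P = w_s / (kappa * u_star)                     # Rouse number
          return C_a * (((h - z) / z) * (a / (h - a))) ** P

      # Hypothetical example: 2 m deep flow, fine sand settling at 2 cm/s.
      h, a = 2.0, 0.05
      z = np.linspace(a, 0.99 * h, 50)
      C = rouse_profile(z, h=h, a=a, C_a=0.01, w_s=0.02, u_star=0.15)
      print(f"concentration drops from {C[0]:.4f} near the bed to {C[-1]:.6f} near the surface")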

  3. Temperature-dependent magnetic properties of individual glass spherules, Apollo 11, 12, and 14 lunar samples.

    Science.gov (United States)

    Thorpe, A. N.; Sullivan, S.; Alexander, C. C.; Senftle, F. E.; Dwornik, E. J.

    1972-01-01

    The magnetic susceptibility of 11 glass spherules from the Apollo 14 lunar fines has been measured from room temperature to 4 K. Data taken at room temperature, 77 K, and 4.2 K show that the soft saturation magnetization was temperature independent. In the temperature range 300 to 77 K the temperature-dependent component of the magnetic susceptibility obeys the Curie law. Susceptibility measurements on these same specimens and, in addition, on 14 similar spherules from the Apollo 11 and 12 missions show a Curie-Weiss relation at temperatures below 77 K, with a Weiss temperature of 3-7 degrees in contrast to the 2-3 degrees found for tektites and synthetic glasses of tektite composition. A proposed model and a theoretical expression closely predict the variation of the susceptibility of the glass spherules with temperature.

  4. [Teacher enhancement at Supercomputing `96

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-13

    The SC`96 Education Program provided a three-day professional development experience for middle and high school science, mathematics, and computer technology teachers. The program theme was Computers at Work in the Classroom, and a majority of the sessions were presented by classroom teachers who have had several years' experience in using these technologies with their students. The teachers who attended the program were introduced to classroom applications of computing and networking technologies and were provided to the greatest extent possible with lesson plans, sample problems, and other resources that could immediately be used in their own classrooms. The attached At a Glance Schedule and Session Abstracts describe in detail the three-day SC`96 Education Program. Also included is the SC`96 Education Program evaluation report and the financial report.

  5. Mantle Convection on Modern Supercomputers

    Science.gov (United States)

    Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.

    2015-12-01

    Mantle convection is the cause of plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic of mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next-generation large-scale architectures can be handled successfully only in an interdisciplinary context. A new priority program of the German Research Foundation (DFG), named SPPEXA, addresses this issue and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report on the TERRA-NEO project, which is part of the high-visibility SPPEXA program and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next-generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small-scale processes on global mantle flow.
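
    The record does not spell out the governing equations; in the incompressible (Boussinesq) form commonly used by mantle convection codes, the conservation of mass, momentum and energy reads as follows (this is the standard formulation, not necessarily the exact TERRA-NEO model):

      % mass, momentum (Stokes with thermal buoyancy), energy; \hat{z} points upward
      \begin{align}
        \nabla \cdot \mathbf{u} &= 0, \\
        -\nabla p + \nabla \cdot \bigl[\eta\,(\nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}})\bigr]
          + \rho_0 \alpha (T - T_0)\, g\, \hat{\mathbf{z}} &= \mathbf{0}, \\
        \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T
          &= \kappa \nabla^2 T + \frac{H}{\rho_0 c_p}.
      \end{align}
      % u: velocity, p: dynamic pressure, eta: viscosity, T: temperature, alpha: thermal expansivity,
      % g: gravitational acceleration, kappa: thermal diffusivity, H: internal heating rate.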

  6. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  7. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-01-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  8. ASCI's Vision for supercomputing future

    International Nuclear Information System (INIS)

    Nowak, N.D.

    2003-01-01

    The full text of publication follows. Advanced Simulation and Computing (ASC, formerly Accelerated Strategic Computing Initiative [ASCI]) was established in 1995 to help Defense Programs shift from test-based confidence to simulation-based confidence. Specifically, ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality - far exceeding what might have been achieved in the absence of a focused initiative. To realize its vision, ASC is creating simulation and prototyping capabilities, based on advanced weapon codes and high-performance computing

  9. Supercomputer requirements for theoretical chemistry

    International Nuclear Information System (INIS)

    Walker, R.B.; Hay, P.J.; Galbraith, H.W.

    1980-01-01

    Many problems important to the theoretical chemist would, if implemented in their full complexity, strain the capabilities of today's most powerful computers. Several such problems are now being implemented on the CRAY-1 computer at Los Alamos. Examples of these problems are taken from the fields of molecular electronic structure calculations, quantum reactive scattering calculations, and quantum optics. 12 figures

  10. Development and evaluation of a tropical feed library for the Cornell Net Carbohydrate and Protein System model Desenvolvimento e avaliação de uma biblioteca de alimentos tropicais para o modelo "Sistema de Carboidrato e Proteína Líquidos" da Universidade de Cornell

    Directory of Open Access Journals (Sweden)

    Luís Orlindo Tedeschi

    2002-03-01

    The Cornell Net Carbohydrate and Protein System (CNCPS) model has been increasingly used in tropical regions for dairy and beef production. However, the lack of appropriate characterization of the feeds has restricted its application. The objective of this study was to develop and evaluate a feed library containing feeds commonly used in tropical regions, with the characteristics needed as inputs for the CNCPS. Feed composition data collected from laboratory databases and from experiments published in scientific journals were used to develop this tropical feed library. The total digestible nutrients (TDN) predicted at 1x intake of maintenance requirement with the CNCPS model agreed with those predicted by the Weiss et al. (1992) equation (r² of 92.7%, MSE of 13, and bias of 0.8% over all feeds). However, the regression r² between the tabular TDN values and the TDN predicted by the CNCPS model or by the Weiss equation was much lower (58.1 and 67.5%, respectively). A thorough comparison between observed and predicted TDN was not possible because of insufficient data to characterize the feeds as required by our models. When we used the mean chemical composition values from the literature data, the TDN predicted by our models did not agree with the measured values. We conclude that TDN values calculated with the Weiss equation and the CNCPS model, which are based on the actual chemical composition of the feeds, result in energy values that more accurately represent the feeds being used in specific production situations than do the tabular values. Few papers published in Latin American journals that were used in this study reported the information needed by models such as the CNCPS.
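
    The r², MSE, and bias quoted above compare two sets of TDN predictions; a minimal sketch of how such agreement statistics can be computed is given below. The TDN arrays are hypothetical placeholders, not values from the feed library.

      import numpy as np

      def agreement_stats(y_ref, y_pred):
          """r-squared (squared Pearson correlation), mean squared error, and mean bias
          between two sets of TDN predictions, as commonly reported when comparing methods."""
          r = np.corrcoef(y_ref, y_pred)[0, 1]
          mse = np.mean((y_pred - y_ref) ** 2)
          bias = np.mean(y_pred - y_ref)
          return r**2, mse, bias

      # Hypothetical TDN values (% of dry matter) for a handful of feeds.
      tdn_weiss = np.array([62.0, 71.5, 55.0, 80.2, 66.4])   # Weiss et al. (1992) equation
      tdn_cncps = np.array([63.1, 70.8, 56.2, 79.0, 67.5])   # CNCPS model at 1x maintenance
      r2, mse, bias = agreement_stats(tdn_weiss, tdn_cncps)
      print(f"r2 = {r2:.3f}, MSE = {mse:.2f}, bias = {bias:.2f} percentage units")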

  11. A portable grid-enabled computing system for a nuclear material study

    International Nuclear Information System (INIS)

    Tsujita, Yuichi; Arima, Tatsumi; Takekawa, Takayuki; Suzuki, Yoshio

    2010-01-01

    We have built a portable grid-enabled computing system specialized for our molecular dynamics (MD) simulation program to study Pu materials easily. Experimental approaches to revealing the properties of Pu materials are often hampered by difficulties such as the radiotoxicity of actinides. Since a computational approach can reveal new aspects to researchers who lack such radioactive facilities, we address MD computation. In order to obtain more realistic results on, e.g., the melting point or thermal conductivity, we need large-scale parallel computations. Most application users who do not have supercomputers at their own institutes must use a remote supercomputer. For such users, we have developed a portable and secure grid-enabled computing system that utilizes the grid computing infrastructure provided by the Information Technology Based Laboratory (ITBL). This system enables us to access remote supercomputers in the ITBL system seamlessly from a client PC through its graphical user interface (GUI). In particular, it enables seamless file access from the GUI. Furthermore, standard output and standard error can be monitored to follow the progress of an executing program. Since the system provides functionality useful for parallel computing on a remote supercomputer, application users can concentrate on their research. (author)

  12. Subcutaneous Implantable Cardioverter-Defibrillator

    Science.gov (United States)

    ... discriminator functions and lacks antitachycardia pacing. Expanded programmability: programming that allows lower shock energies and the ability ... References: 1. Weiss R, Knight BP, Gold MR, Leon AR, Herre JM, ...

  13. Healthy Movements: Your Body's Mechanics

    Science.gov (United States)

    ... body, are governed by the same basic physical laws,” says Dr. Jeffrey Weiss, a biomechanics expert at ... for movement disorders such as cerebral palsy and Parkinson’s disease. Joints are a common source of problems ...

  14. Development of in-situ visualization tool for PIC simulation

    International Nuclear Information System (INIS)

    Ohno, Nobuaki; Ohtani, Hiroaki

    2014-01-01

    As supercomputer capability improves, the size of simulations and of their output data also becomes larger and larger. Visualization is usually carried out on a researcher's PC with interactive visualization software after the computer simulation has been performed. However, data sizes are now becoming too large for this approach. A promising answer is in-situ visualization. In this case the simulation code is coupled with the visualization code, and visualization is performed together with the simulation on the same supercomputer. We developed an in-situ visualization tool for particle-in-cell (PIC) simulation, provided as a Fortran module. We coupled it with a PIC simulation code, tested the coupled code on the Plasma Simulator supercomputer, and verified that it works. (author)
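
    The coupling pattern described above can be sketched as follows (in Python for brevity, although the actual tool is provided as a Fortran module): the simulation driver calls a visualization hook every few steps so that images are produced during the run instead of writing raw data for later post-processing. All names here are hypothetical.

      def render_in_situ(step, fields, outdir="frames"):
          """Placeholder for the in-situ rendering call (e.g. writing an image per step)."""
          print(f"step {step}: rendering {sorted(fields)} into {outdir}/")

      def run_pic_simulation(n_steps, viz_every=10):
          fields = {"E": None, "B": None, "particles": None}   # stand-ins for PIC data
          for step in range(n_steps):
              # ... advance particles and fields here (omitted) ...
              if step % viz_every == 0:
                  render_in_situ(step, fields)   # visualize without dumping raw data to disk

      run_pic_simulation(30)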

  15. Surgical treatment of necrotic panophthalmitis in snakes

    African Journals Online (AJOL)

    two toothed Weiss iris forceps, curved; two grooved 15 cm dissecting forceps; ... duced is of rather short duration, and they cannot easily be administered ... Sagatal (pentobarbitone sodium) has proved to be the drug of choice for most major.

  16. Fulltext PDF

    Indian Academy of Sciences (India)

    stage when we can regard our methodology to be slightly more reliable than betting .... been corrected by feeding the observational data of polar fields into the theoretical model. .... Tobias, S., Hughes, D., Weiss, N. 2006, Nature, 442, 26.

  17. HIV status disclosure rate and reasons for non-disclosure among ...

    African Journals Online (AJOL)

    2016-09-01

    Sep 1, 2016 ... K.K. Ilohc, I.J. Emodide, N.S. Ibeziakofg, I.N. Obumneme-Anyimh, O.N. Ilohi,. A.C. Ayukj ..... 2004; Grubman, Gross, Lerner-Weiss, Hernandez, .... Receiving Care at Kilimanjaro Christian Medical Centre in Moshi, Tanzania.

  18. Disease: H01017 [KEGG MEDICUS

    Lifescience Database Archive (English)

    Full Text Available H01017 Choanal atresia and lymphedema Choanal atresia and lymphoedema is a rare co...orderon ML, Weiss MH ... TITLE ... Choanal atresia and lymphedema. ... JOURNAL ... Ann Otol Rhinol Laryngol 100:661-4 (1991) DOI:10.1177/000348949110000812 ...

  19. Aviation Research and the Internet

    Science.gov (United States)

    Scott, Antoinette M.

    1995-01-01

    The Internet is a network of networks. It was originally funded by the Defense Advanced Research Projects Agency (DOD/DARPA) and evolved in part from the connection of supercomputer sites across the United States. The National Science Foundation (NSF) made the most of its supercomputers by connecting the sites to each other. This made the supercomputers more efficient and now allows scientists, engineers and researchers to access the supercomputers from their own labs and offices. The high-speed networks that connect the NSF supercomputers form the backbone of the Internet. The World Wide Web (WWW) is a menu system. It gathers Internet resources from all over the world into a series of screens that appear on your computer. The WWW is also a distributed system: it stores information on many computers (servers), and these servers can go out and get data when you ask for it. Hypermedia is the basis of the WWW. One can 'click' on a section and visit other hypermedia (pages). Our approach to demonstrating the importance of aviation research through the Internet began with learning how to put pages on the Internet (on-line) ourselves. We were assigned two aviation companies, Vision Micro Systems Inc. and Innovative Aerodynamic Technologies (IAT), and developed home pages for these SBIR companies. The equipment used to create the pages consisted of UNIX and Macintosh machines. HTML Supertext software was used to write the pages and a Sharp JX600S scanner to scan the images. As a result, with the use of the UNIX, Macintosh, Sun, PC, and AXIL machines, we were able to present our home pages to over 800,000 visitors.

  20. Fast methods for long-range interactions in complex systems. Lecture notes

    Energy Technology Data Exchange (ETDEWEB)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas (eds.)

    2011-10-13

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction and overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, the Barnes-Hut Tree Method, Multigrid, FFT-based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC), which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)
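
    All of the fast methods listed attack the same bottleneck: evaluating every pairwise long-range interaction directly costs O(N^2) operations, as in the minimal sketch below (unit charges and a unit prefactor are assumed), whereas tree, multipole, multigrid, and FFT-based schemes reduce the cost to O(N log N) or O(N).

        # Naive O(N^2) direct summation of the Coulomb energy of N unit charges.
        # This is the cost that fast methods (FMM, tree codes, multigrid,
        # FFT-based Ewald-like schemes) are designed to avoid for large N.
        import numpy as np

        def direct_coulomb_energy(positions):
            """Sum q_i q_j / r_ij over all pairs (unit charges)."""
            n = len(positions)
            energy = 0.0
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(positions[i] - positions[j])
                    energy += 1.0 / r
            return energy

        rng = np.random.default_rng(0)
        pts = rng.random((200, 3))           # 200 random particles in a unit box
        print(direct_coulomb_energy(pts))    # cost grows as N^2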

  1. Fast methods for long-range interactions in complex systems. Lecture notes

    International Nuclear Information System (INIS)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas

    2011-01-01

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction and overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, the Barnes-Hut Tree Method, Multigrid, FFT-based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC), which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)

  2. Variation in habitat choice and delayed reproduction: Adaptive queuing strategies or individual quality differences?

    NARCIS (Netherlands)

    Van de Pol, M.; Pen, I.; Heg, D.; Weissing, F.J.

    2007-01-01

    In most species, some individuals delay reproduction or occupy inferior breeding positions. The queue hypothesis tries to explain both patterns by proposing that individuals strategically delay breeding (queue) to acquire better breeding or social positions. In 1995, Ens, Weissing, and Drent

  3. 75 FR 81624 - Advisory Committee on Interdisciplinary, Community-Based Linkages; Notice of Meeting

    Science.gov (United States)

    2010-12-28

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Health Resources and Service Administration Advisory... recommendations for improvement of these training programs to the Secretary and the Congress. Agenda: The ACICBL... Weiss, Designated Federal Official within the Bureau of Health Professions, Health Resources and...

  4. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kozacik, Stephen [EM Photonics, Inc., Newark, DE (United States)

    2017-05-15

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that allow users to take full advantage of the new technology by working at a level abstracted away from platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  5. Eesti klaasikunstnik lummab ameeriklasi / Meeli Kõiva ; interv. Robert Vaks

    Index Scriptorium Estoniae

    Kõiva, Meeli

    1998-01-01

    Meeli Kõiva, whose interview and photographs of whose stained-glass works appeared in the winter issue of "Stained Glass", the journal of the American stained-glass artists' association (Helene Weiss, "Dreams of Transparent Space"), on stained glass, its use in America and Estonia, and the starting points of her own work.

  6. PATTERNS OF SEVERE AND COMPLICATED MALARIA IN CHILDREN

    African Journals Online (AJOL)

    GB

    2017-01-01

    Jan 1, 2017 ... A random effect meta-analysis was conducted on crude MetS prevalence rates. ... diabetes among people with MetS is a fivefold (4). The clustering of CVD risk ..... 2004;53(8):2087-94. 11. Weiss R, Bremer AA, Lustig RH.

  7. A Random Assignment Evaluation of Learning Communities Seven Years Later: Impacts on Education and Earnings Outcomes

    Science.gov (United States)

    Weiss, Michael J.; Mayer, Alexander; Cullinan, Dan; Ratledge, Alyssa; Sommo, Colleen; Diamond, John

    2014-01-01

    Empirical evidence confirms that increased education is positively associated with higher earnings across a wide spectrum of fields and student demographics (Barrow & Rouse, 2005; Card, 2001; Carneiro, Heckman, & Vytlacil, 2011; Dadgar & Weiss, 2012; Dynarski, 2008; Jacobson & Mokher, 2009; Jepsen, Troske, & Coomes, 2009; Kane…

  8. Download this PDF file

    African Journals Online (AJOL)

    Mr Olusoji

    Christian. Islam. 107 (63.3 %). 75(57.7%). 62 (36.7%). 55 (42.3%). Tribes. Yoruba. Igbo. Hausa ..... Abu-Heiji A al Chalabi H, el Iloubani N Abruptio placentae: risk ... Johana Weiss, MD, & Ramada S. Smith, MD, Critical Care Obstetrics in ...

  9. Transitions in turbulent rotating convection: A Lagrangian perspective : A Lagrangian perspective

    NARCIS (Netherlands)

    Rajaei, H.; Joshi, P.R.; Alards, K.M.J.; Kunnen, R.P.J.; Toschi, F.; Clercx, H.J.H.

    2016-01-01

    Using measurements of Lagrangian acceleration in turbulent rotating convection and accompanying direct numerical simulations, we show that the transition between turbulent states reported earlier [e.g., S. Weiss et al., Phys. Rev. Lett. 105, 224501 (2010)] is a boundary-layer transition between the

  10. Application of 't Hooft's renormalization scheme to two-loop calculations

    International Nuclear Information System (INIS)

    Vladimirov, A.A.

    1975-01-01

    The advantages of the 't Hooft scheme for asymptotic calculations in the renormalization group have been demonstrated. Two-loop calculations have been carried out in three renormalizable models: in scalar electrodynamics, in a pseudoscalar Yukawa theory and in the Wess-Zumino supersymmetric model [ru

  11. Theoretical model for thin ferroelectric films and the multilayer structures based on them

    Energy Technology Data Exchange (ETDEWEB)

    Starkov, A. S., E-mail: starkov@iue.tuwien.ac.at; Pakhomov, O. V. [St. Petersburg National Research Univeristy ITMO, Institute of Refrigeration and Biotechnologies (Russian Federation); Starkov, I. A. [Vienna University of Technology, Institute for Microelectronics (Austria)

    2013-06-15

    A modified Weiss mean-field theory is used to study the dependence of the properties of a thin ferroelectric film on its thickness. The possibility of introducing gradient terms into the thermodynamic potential is analyzed using the calculus of variations. An integral equation is introduced to generalize the well-known Langevin equation to the case of the boundaries of a ferroelectric. An analysis of this equation leads to the existence of a transition layer at the interface between ferroelectrics or a ferroelectric and a dielectric. The permittivity of this layer is shown to depend on the electric field direction even if the ferroelectrics in contact are homogeneous. The results obtained in terms of the Weiss model are compared with the results of the models based on the correlation effect and the presence of a dielectric layer at the boundary of a ferroelectric and with experimental data.
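
    As background, the unmodified Weiss mean-field (Langevin-Weiss) description of a bulk ferroelectric can be summarized by a local self-consistency condition; the symbols below (dipole number density n, dipole moment p, mean-field constant λ) are the standard textbook ones and are not taken from the paper, which replaces this purely local relation by an integral equation so that interface and boundary effects can appear.

        P = n p L( p (E + λ P) / (k_B T) ),   where  L(x) = coth(x) - 1/x  is the Langevin function.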

  12. Theoretical model for thin ferroelectric films and the multilayer structures based on them

    International Nuclear Information System (INIS)

    Starkov, A. S.; Pakhomov, O. V.; Starkov, I. A.

    2013-01-01

    A modified Weiss mean-field theory is used to study the dependence of the properties of a thin ferroelectric film on its thickness. The possibility of introducing gradient terms into the thermodynamic potential is analyzed using the calculus of variations. An integral equation is introduced to generalize the well-known Langevin equation to the case of the boundaries of a ferroelectric. An analysis of this equation leads to the existence of a transition layer at the interface between ferroelectrics or a ferroelectric and a dielectric. The permittivity of this layer is shown to depend on the electric field direction even if the ferroelectrics in contact are homogeneous. The results obtained in terms of the Weiss model are compared with the results of the models based on the correlation effect and the presence of a dielectric layer at the boundary of a ferroelectric and with experimental data

  13. Theoretical model for thin ferroelectric films and the multilayer structures based on them

    Science.gov (United States)

    Starkov, A. S.; Pakhomov, O. V.; Starkov, I. A.

    2013-06-01

    A modified Weiss mean-field theory is used to study the dependence of the properties of a thin ferroelectric film on its thickness. The possibility of introducing gradient terms into the thermodynamic potential is analyzed using the calculus of variations. An integral equation is introduced to generalize the well-known Langevin equation to the case of the boundaries of a ferroelectric. An analysis of this equation leads to the existence of a transition layer at the interface between ferroelectrics or a ferroelectric and a dielectric. The permittivity of this layer is shown to depend on the electric field direction even if the ferroelectrics in contact are homogeneous. The results obtained in terms of the Weiss model are compared with the results of the models based on the correlation effect and the presence of a dielectric layer at the boundary of a ferroelectric and with experimental data.

  14. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare across computers? In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  15. Gebruik van informatie bij investeringen in infrastructuur : literatuurstudie en onderzoeksopzet

    NARCIS (Netherlands)

    Bax, C.A.

    2007-01-01

    Use of information when investing in infrastructure; Literature study and research design (Inter)national literature shows that government bodies do not always use scientific knowledge in the decision making process (Weiss, 1977; In 't Veld, 2000). This also applies to road safety research (Elvik,

  16. Author Details

    African Journals Online (AJOL)

    Badenhorst-Weiss, JA. Vol 15, No 1 (2011) - Articles Supply chain management problems at South African automotive component manufacturers. Abstract PDF · Vol 18, No 3 (2014) - Articles Competitive advantage of independent small businesses in Soweto Abstract PDF. ISSN: 1998-8125. AJOL African Journals Online.

  17. "-o C1) (Q

    African Journals Online (AJOL)

    etc. by the French Protestant refugees. As far as the information supplied is concerned, use was made mainly of Charles Weiss's previously mentioned publications. (a) Brandenburg (after 1701 the kingdom of Prussia). In Brandenburg the Elector Frederick William (1640-1688) ascended the throne in 1640. Through the violence of war.

  18. Trichloro(1,4,7-trimethyl-1,4,7-triazacyclononane)chromium(III)

    DEFF Research Database (Denmark)

    Klitgaard, Søren Kegnaes; Schau-Magnussen, Magnus

    2005-01-01

    The 1,4,7-trimethyl-1,4,7-triazacyclononane (tmtacn) ligand has become one of the classic ligands in coordination chemistry (Wieghardt et al., 1982 [Wieghardt, K., Chaudhuri, P., Nuber, B. & Weiss, J. (1982). Inorg. Chem. 21, 3086-3090.] ). In recent years, tmtacn-metal complexes ...

  19. Metamagnetism in Ce(Ga,Al)2

    Indian Academy of Sciences (India)

    1Department of Physics, Indian Institute of Technology, Mumbai 400 076, India. 2Department of ... The Curie–Weiss law fit of the high temperature .... One of the authors, KGS would like to thank the B.R.N.S. (D.A.E.) for financial support.

  20. Auto-Baecklund transformation and similarity reductions to the variable coefficients variant Boussinesq system

    Energy Technology Data Exchange (ETDEWEB)

    Moussa, M.H.M. [Department of Mathematic, Faculty of Education, Ain Shams University, Roxy, Hiliopolis, Cairo (Egypt)], E-mail: m_h_m_moussa@yahoo.com; El Shikh, Rehab M. [Department of Mathematic, Faculty of Education, Ain Shams University, Roxy, Hiliopolis, Cairo (Egypt)

    2008-02-25

    Based on the close connections among the homogeneous balance (HB) method, the Weiss-Tabor-Carnevale (WTC) method and the Clarkson-Kruskal (CK) method, we study the Baecklund transformation and similarity reductions of the variable coefficients variant Boussinesq system. In the meantime, new exact solutions are also found.

  1. Polskoje iskusstvo dlja nesvedushtshih

    Index Scriptorium Estoniae

    2007-01-01

    On the exhibition opening at Kumu, "Metaphor and Myth. Literary and historical motifs in Polish art at the turn of the 19th and 20th centuries". The curator is Adam Organisty. Paintings, drawings, watercolours, sculptures and porcelain are on show. Represented are the Polish classics Jan Matejko, Stanislaw Wyspianski, Jacek Malczewski and Wojciech Weiss

  2. Download this PDF file

    African Journals Online (AJOL)

    Mr Olusoji

    today (as it was in the days of Semmelweis, Lister). Anaerobic cocci, Clostridium perfringens ... .... in 1884, by the Danish physician Hans Christian ... with the degree of precision expected. ..... Akram H, Shamsuzzaman AKM I, Asma ASI, Zahura B ...

  3. Delivering Training Assessments in a Soldier-Centered Learning Environment: Year Two

    Science.gov (United States)

    2015-12-01

    reduces the efficiency of the CAT (e.g., Kingsbury & Zara, 1991; Weiss, 2004). EXPERIMENT 3: THE EFFECTS OF PERIODIC TESTING DURING...on Technology in Education, 45 (1), 61-82. Kingsbury, C. G., & Zara, A. R. (1991). A comparison of procedures for content-sensitive item

  4. Parallel-Vector Algorithm For Rapid Structural Anlysis

    Science.gov (United States)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
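
    The skyline and variable-band schemes referred to both store, for each column of the symmetric stiffness matrix, only the entries between the first non-zero row and the diagonal. A minimal sketch of this column-height idea follows; the function name and the small test matrix are illustrative, and the paper's variable-band variant differs in how the profile is organized for vector and parallel processing.

        # Minimal sketch of skyline (column-height) storage for a symmetric matrix:
        # each column keeps only the entries from its first non-zero row down to
        # the diagonal. Illustrative only; real FE solvers combine this layout
        # with specialized factorization kernels.
        import numpy as np

        def to_skyline(A):
            """Pack the upper triangle of symmetric A into a 1D skyline array."""
            n = A.shape[0]
            values, col_start = [], [0]
            for j in range(n):
                nz = np.nonzero(A[:j + 1, j])[0]
                first = nz[0] if nz.size else j    # first non-zero row in column j
                values.extend(A[first:j + 1, j])   # store rows first..j of column j
                col_start.append(len(values))
            return np.array(values), col_start

        A = np.array([[4., 1., 0., 0.],
                      [1., 5., 2., 0.],
                      [0., 2., 6., 3.],
                      [0., 0., 3., 7.]])
        vals, ptr = to_skyline(A)
        print(vals, ptr)   # far fewer stored entries than n*n for a banded matrix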

  5. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of the processors of the compute nodes and their memory also plays an important role in the overall performance of a parallel application running on a supercomputer. DL...

  6. Piirid tugevnevad segases maailmas / Kristjan Jaak Kangur

    Index Scriptorium Estoniae

    Kangur, Kristjan Jaak

    2002-01-01

    Films shown at the 6th Black Nights Film Festival: the France - Palestine - Morocco - Germany co-production "Divine Intervention" (director Elia Suleiman), the Slovenia - Germany "Guardian of the Frontier" (director Maja Weiss) and the Australian "Rabbit-Proof Fence" (director Phillip Noyce)

  7. Effects of reaction temperature on size and optical properties of ...

    Indian Academy of Sciences (India)

    Administrator

    influential factors in shape control of CdSe nanocrystals by changing the ratio of .... four different temperatures (200, 220, 240 and 280°C). During the whole .... J, Wu A M, Gambhir S S and Weiss S 2005 Science 307 538. Murray C B, Norris ...

  8. Consistency of the Takens estimator for the correlation dimension

    NARCIS (Netherlands)

    Borovkova, S.; Burton, Robert; Dehling, H.

    Motivated by the problem of estimating the fractal dimension of a strange attractor, we prove weak consistency of U-statistics for stationary ergodic and mixing sequences when the kernel function is unbounded, extending by this earlier results of Aaronson, Burton, Dehling, Gilat, Hill and Weiss. We

  9. A la industria colombiana le falta un tornillo

    Directory of Open Access Journals (Sweden)

    Mauricio Perfetti del Corral

    1995-01-01

    Full Text Available The Colombian company between technocracy and participation. From Taylorism to total quality. Anita Weiss (prologue by Darío Mesa). Universidad Nacional de Colombia, Department of Sociology, Santafé de Bogotá, 1994, 208 pp.

  10. SDI Software Technology Program Plan Version 1.5

    Science.gov (United States)

    1987-06-01

    Display Generator (SDG) [Patterson 83] SDG supports the creation, display, modification, storage, and retrieval of components of simulation models via... Compass '86, Washington D.C. July 7-11, 1986. [Parnas 85] Parnas, D.L., and Weiss, D.M., Active Design Reviews: Principles and Practices, NRL Report

  11. A novel hydrogen-bonded cyclic dibromide in an organic ...

    Indian Academy of Sciences (India)

    Unknown

    2Institut für Anorganische Chemie, Christian-Albrechts-Universität Kiel, Olshausenstraße 40, D-24098 ... H2O molecules are linked to bromide anions via O–H⋅⋅⋅Br hydrogen bonding ..... Weiss R, Reichel S, Handlke M and Hampel F 1998.

  12. Strategic research field no.4, industrial innovations

    International Nuclear Information System (INIS)

    Kato, Chisachi

    2011-01-01

    The 'Kei' supercomputer is planned to start full-scale operation in about a year and a half. With it, High Performance Computing (HPC) is likely to contribute not only to further progress in basic and applied sciences, but also to bringing about innovations in various fields of industry. It is expected to substantially shorten design times, drastically improve the performance and/or reliability of various industrial products, and greatly enhance the safety of large-scale power plants. In this article, the six research themes currently being prepared in this strategic research field, 'industrial innovations', so that the 'Kei' supercomputer can be used as soon as it starts operation, are briefly described with regard to their specific goals and the breakthroughs they are expected to bring about in industry. How these themes were determined is also explained. Several measures to promote widespread industrial use of HPC, including the 'Kei' supercomputer, are also being planned and are elaborated in this article. (author)

  13. Distributed interactive graphics applications in computational fluid dynamics

    International Nuclear Information System (INIS)

    Rogers, S.E.; Buning, P.G.; Merritt, F.J.

    1987-01-01

    Implementation of two distributed graphics programs used in computational fluid dynamics is discussed. Both programs are interactive in nature. They run on a CRAY-2 supercomputer and use a Silicon Graphics Iris workstation as the front-end machine. The hardware and supporting software are from the Numerical Aerodynamic Simulation project. The supercomputer does all numerically intensive work and the workstation, as the front-end machine, allows the user to perform real-time interactive transformations on the displayed data. The first program was written as a distributed program that computes particle traces for fluid flow solutions existing on the supercomputer. The second is an older post-processing and plotting program modified to run in a distributed mode. Both programs have realized a large increase in speed over that obtained using a single machine. By using these programs, one can learn quickly about complex features of a three-dimensional flow field. Some color results are presented

  14. An evaluation of current high-performance networks

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Christian; Bonachea, Dan; Cote, Yannick; Duell, Jason; Hargrove, Paul; Husbands, Parry; Iancu, Costin; Welcome, Michael; Yelick, Katherine

    2003-01-25

    High-end supercomputers are increasingly built out of commodity components, and lack tight integration between the processor and network. This often results in inefficiencies in the communication subsystem, such as high software overheads and/or message latencies. In this paper we use a set of microbenchmarks to quantify the cost of this commoditization, measuring software overhead, latency, and bandwidth on five contemporary supercomputing networks. We compare the performance of the ubiquitous MPI layer to that of lower-level communication layers, and quantify the advantages of the latter for small message performance. We also provide data on the potential for various communication-related optimizations, such as overlapping communication with computation or other communication. Finally, we determine the minimum size needed for a message to be considered 'large' (i.e., bandwidth-bound) on these platforms, and provide historical data on the software overheads of a number of supercomputers over the past decade.
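
    Such microbenchmarks typically reduce to a two-process ping-pong in which small messages estimate software overhead and latency while large messages estimate bandwidth. Below is a minimal sketch assuming the mpi4py bindings and NumPy; it illustrates the measurement idea and is not the authors' benchmark suite.

        # Minimal MPI ping-pong sketch: round-trip time for small messages gives
        # an estimate of latency; large messages give an estimate of bandwidth.
        # Run with e.g.:  mpirun -np 2 python pingpong.py  (assumes mpi4py + numpy)
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        reps = 100

        for nbytes in (8, 1 << 20):                 # one small and one large message
            buf = np.zeros(nbytes, dtype=np.uint8)
            comm.Barrier()
            t0 = MPI.Wtime()
            for _ in range(reps):
                if rank == 0:
                    comm.Send(buf, dest=1, tag=0)
                    comm.Recv(buf, source=1, tag=0)
                elif rank == 1:
                    comm.Recv(buf, source=0, tag=0)
                    comm.Send(buf, dest=0, tag=0)
            dt = (MPI.Wtime() - t0) / reps          # average round-trip time
            if rank == 0:
                print(f"{nbytes} B: {dt/2*1e6:.2f} us one-way, "
                      f"{nbytes/(dt/2)/1e6:.1f} MB/s")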

  15. Supercomputer modeling of volcanic eruption dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kieffer, S.W. [Arizona State Univ., Tempe, AZ (United States); Valentine, G.A. [Los Alamos National Lab., NM (United States); Woo, Mahn-Ling [Arizona State Univ., Tempe, AZ (United States)

    1995-06-01

    Our specific goals are to: (1) provide a set of models based on well-defined assumptions about initial and boundary conditions to constrain interpretations of observations of active volcanic eruptions--including movies of flow front velocities, satellite observations of temperature in plumes vs. time, and still photographs of the dimensions of erupting plumes and flows on Earth and other planets; (2) to examine the influence of subsurface conditions on exit plane conditions and plume characteristics, and to compare the models of subsurface fluid flow with seismic constraints where possible; (3) to relate equations-of-state for magma-gas mixtures to flow dynamics; (4) to examine, in some detail, the interaction of the flowing fluid with the conduit walls and ground topography through boundary layer theory so that field observations of erosion and deposition can be related to fluid processes; and (5) to test the applicability of existing two-phase flow codes for problems related to the generation of volcanic long-period seismic signals; (6) to extend our understanding and simulation capability to problems associated with emplacement of fragmental ejecta from large meteorite impacts.

  16. Trends in supercomputers and computational physics

    International Nuclear Information System (INIS)

    Bloch, T.

    1985-01-01

    Today, scientists using numerical models explore the basic mechanisms of semiconductors, apply global circulation models to climatic and oceanographic problems, probe into the behaviour of galaxies and try to verify basic theories of matter, such as Quantum Chromo Dynamics by simulating the constitution of elementary particles. Chemists, crystallographers and molecular dynamics researchers develop models for chemical reactions, formation of crystals and try to deduce the chemical properties of molecules as a function of the shapes of their states. Chaotic systems are studied extensively in turbulence (combustion included) and the design of the next generation of controlled fusion devices relies heavily on computational physics. (orig./HSI)

  17. A supercomputer for parallel data analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    The project of a powerful multiprocessor system is proposed. The main purpose of the project is to develop a low-cost computer system with a processing rate of a few tens of millions of operations per second. The system solves many data-analysis problems from high-energy physics spectrometers. It includes about 70 powerful MOTOROLA-68020-based slave microprocessor boards linked through VME crates to a host VAX microcomputer. Each microprocessor board runs the same algorithm, which requires a long computing time. The host computer distributes data over the microprocessor boards and collects and combines the obtained results. The architecture of the system easily allows it to be used in real-time mode
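
    The host/slave arrangement described, a coordinator that scatters data to many identical processing boards and merges their partial results, is the classic master-worker pattern. The sketch below reproduces the pattern with a present-day process pool; the analysis function and the data are placeholders, not the original 68020/VAX software.

        # Master-worker sketch: the "host" scatters chunks of data to identical
        # workers and combines their partial results, mirroring the host/slave
        # architecture described above (illustrative only).
        from multiprocessing import Pool

        def analyse(chunk):
            """Stand-in for the per-board analysis algorithm."""
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            data = list(range(1_000_000))
            chunks = [data[i::8] for i in range(8)]      # split the event data 8 ways
            with Pool(processes=8) as pool:
                partial = pool.map(analyse, chunks)      # distribute to the workers
            print(sum(partial))                          # host combines the results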

  18. Fluctuation relations for equilibrium states with broken discrete or continuous symmetries

    International Nuclear Information System (INIS)

    Lacoste, D; Gaspard, P

    2015-01-01

    Isometric fluctuation relations are deduced for the fluctuations of the order parameter in equilibrium systems of condensed-matter physics with broken discrete or continuous symmetries. These relations are similar to their analogues obtained for non-equilibrium systems where the broken symmetry is time reversal. At equilibrium, these relations show that the ratio of the probabilities of opposite fluctuations goes exponentially with the symmetry-breaking external field and the magnitude of the fluctuations. These relations are applied to the Curie–Weiss, Heisenberg, and XY models of magnetism where the continuous rotational symmetry is broken, as well as to the q-state Potts model and the p-state clock model where discrete symmetries are broken. Broken symmetries are also considered in the anisotropic Curie–Weiss model. For infinite systems, the results are calculated using large-deviation theory. The relations are also applied to mean-field models of nematic liquid crystals where the order parameter is tensorial. Moreover, their extension to quantum systems is also deduced. (paper)
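
    For the magnetization fluctuations considered here, the equilibrium isometric fluctuation relation can be stated schematically as follows (m and m' are two fluctuations of the intensive magnetization with |m| = |m'|, h is the symmetry-breaking external field, β the inverse temperature, and N the system size; this is a paraphrase, not a quotation of the paper's result):

        P(m) / P(m') ≃ exp[ N β h · (m - m') ]   for  |m| = |m'|,

    so the log-ratio of the two probabilities grows linearly with the field, with the system size, and with the difference of the projections of m and m' onto the field direction.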

  19. Ferromagnetic-phase transition in the spinel-type CuCr2Te4

    International Nuclear Information System (INIS)

    Suzuyama, Takeshi; Awaka, Junji; Yamamoto, Hiroki; Ebisu, Shuji; Ito, Masakazu; Suzuki, Takashi; Nakama, Takao; Yagasaki, Katsuma; Nagata, Shoichi

    2006-01-01

    The ferromagnetic-phase transition in spinel-type CuCr2Te4 has been clearly observed. CuCr2Te4 is a telluride spinel with lattice constant a = 11.134 Å, which has been synthesized successfully. The heat capacity exhibits a sharp peak due to the ferromagnetic-phase transition with the Curie temperature T_C = 326 K. This value of T_C corresponds exactly to that of the negative peak of dM/dT in a low field of 1.0 Oe. The magnetic susceptibility shows Curie-Weiss behavior between 380 and 650 K with an effective magnetic moment μ_eff = 4.14 μ_B per Cr ion and a Weiss constant θ = +357 K. The low-temperature magnetization indicates spin-wave excitations, where the existence of the first (Bloch T^3/2) term and the next T^5/2 term is verified experimentally. The spin-wave excitations are detected up to approximately 250 K, which is a fairly high temperature

  20. A laboratory based x-ray reflectivity system

    International Nuclear Information System (INIS)

    Holt, S.A.; Creagh, D.C.; Jamie, I.M.; Dowling, T.L.; Brown, A.S.

    1996-01-01

    Full text: X-ray Reflectivity (XRR) has over the last decade proved to be a versatile and powerful technique by which the thickness of thin films, surface roughness and interface roughness can be determined. The systems amenable to study range from organic monolayers (on liquid or solid substrates) to layered metal or semiconductor systems. Access to XRR has been limited by the requirement for synchrotron radiation sources. The development of XRR systems for the laboratory environment was pioneered by Weiss. An X-ray reflectometer has been constructed by the Department of Physics (Australian Defence Force Academy) and the Research School of Chemistry (Australian National University). The general principles of the design are similar to those described by Weiss. The reflectometer is currently in the early stages of commissioning, with encouraging results thus far, including the diffraction pattern of Mobil Catalytic Material (MCM), consisting primarily of SiO2. The poster will describe the reflectometer and its operation and present a summary of the most important results obtained to date

  1. Gedichte aus den Jahren 1968 bis 1975 / Viivi Luik ; tõlk. Gisbert Jänicke

    Index Scriptorium Estoniae

    Luik, Viivi, 1946-

    1999-01-01

    Text in German and Estonian. Contents: Weite = Avarus ; "Wie der Krieg..." = "Otsekui sõda..." ; "es vergehn..." = "hävivad..." ; Das Geheimnis = Saladus ; "Ob auch ich weiss..." = "Kas minagi tean..." ; "Ich redete einmal von Feldern..." = "Rääkisin väljadest ükskord..." ; Andere = Teine ; Der Tag = Päev

  2. Comparison of Quality of Life Perceptions of Caregivers of Individuals with Intellectual Disabilities in the United States and the Czech Republic

    Science.gov (United States)

    Raver, Sharon A.; Michalek, Anne M.; Michalik, Jan; Valenta, Milan

    2010-01-01

    Caregivers of individuals with disabilities in the United States have been reported to experience additional hardships than families with typical children as they attempt to balance family and work (Parish, Rose, Grinstein-Weiss, Richman, & Andrews, 2008). In this study, 31 caregivers of individuals with intellectual disabilities from the…

  3. Some remarks on gravitational wave experiments

    International Nuclear Information System (INIS)

    Kafka, P.

    1977-01-01

    I shall first summarize the result of the old Munich-Frascati Weber-type experiment, then discuss why the Munich group decided to develop a Weiss-Forward-type laser interferometer, and finally I shall sketch the strategy of optimal detection of collapse events with the latter type of antenna. (orig.) [de

  4. AFIT/AFOSR Workshop on the Role of Wavelets in Signal Processing Applications

    Science.gov (United States)

    1992-08-28

    Stein and G. Weiss, "Fourier analysis on Euclidean spaces," Princeton University Press, 1971. [V] G. Vitali, Sulla condizione di chiusura di un sistema ...present the more general framework into which wavelets fit, hence suggesting companion ways of time-scale analysis for self-similar and 1/f-type processes

  5. A large deviations approach to the transient of the Erlang loss model

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Ridder, Annemarie

    2001-01-01

    This paper deals with the transient behavior of the Erlang loss model. After scaling both arrival rate and number of trunks, an asymptotic analysis of the blocking probability is given. Apart from that, the most likely path to blocking is given. Compared to Shwartz and Weiss [Large Deviations for
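
    For reference, the stationary blocking probability of the Erlang loss model with N trunks and offered load A = λ/μ is the standard Erlang B formula (not stated in the abstract, but the quantity whose transient counterpart is analysed here):

        B(N, A) = (A^N / N!) / Σ_{k=0}^{N} (A^k / k!),

    and the scaling regime considered lets the arrival rate and the number of trunks grow proportionally, so that large-deviations asymptotics of the transient blocking probability apply.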

  6. Neutron diffraction study of quasi-one-dimensional spin-chain ...

    Indian Academy of Sciences (India)

    The high temperature magnetic susceptibility obeys the Curie–Weiss law; the value of the paramagnetic Curie temperature (θ_p) decreases as the concentration of iron increases and it becomes negative for x = 0.4. No extra Bragg peak as well as no observable enhancement in the intensity of the fundamental (nuclear) ...

  7. Positive Harris recurrence and diffusion scale analysis of a push pull queueing network

    NARCIS (Netherlands)

    Nazarathy, J.; Weiss, G.

    2010-01-01

    We consider a push pull queueing network with two servers and two types of job which are processed by the two servers in opposite order, with stochastic generally distributed processing times. This push pull network was introduced by Kopzon and Weiss, who assumed exponential processing times. It is

  8. Stable cohomology of the universal Picard varieties and the extended mapping class group

    DEFF Research Database (Denmark)

    Ebert, Johannes; Randal-Williams, Oscar

    2012-01-01

    We study the moduli spaces which classify smooth surfaces along with a complex line bundle. There are homological stability and Madsen-Weiss type results for these spaces (mostly due to Cohen and Madsen), and we discuss the cohomological calculations which may be deduced from them. We then relat...

  9. Broad and Narrow CHC Abilities Measured and Not Measured by the Wechsler Scales: Moving beyond Within-Battery Factor Analysis

    Science.gov (United States)

    Flanagan, Dawn P.; Alfonso, Vincent C.; Reynolds, Matthew R.

    2013-01-01

    In this commentary, we reviewed two clinical validation studies on the Wechsler Scales conducted by Weiss and colleagues. These researchers used a rigorous within-battery model-fitting approach that demonstrated the factorial invariance of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) and Wechsler Adult Intelligence…

  10. Die Leben Einsteins eine Reise durch die Geschichte der Physik

    CERN Document Server

    Fiami

    2005-01-01

    Everyone knows the names Einstein, Newton or Galilei, but what do we actually know about them? Here is a portrait of Einstein based on six milestones from the history of physics. Einstein appears as the protagonist in different epochs and in different discoveries that changed the world.

  11. 75 FR 65528 - Membership of National Science Foundation's Senior Executive Service Performance Review Board

    Science.gov (United States)

    2010-10-25

    ... Director, Division of Human Resource Management and Chief Human Capital Officer, National Science..., Division of Human Resource Management and Chief Human Capital Officer; Mark L. Weiss, Director, Division of... Human Resource Management and Chief Human Capital Officer. [FR Doc. 2010-26763 Filed 10-22-10; 8:45 am...

  12. What is Conceptualism?

    DEFF Research Database (Denmark)

    Jensen, Boris Brorman

    2017-01-01

    ‘Adventures in Conceptualism’ by Kristoffer Lindhardt Weiss explores the method of concept-based architecture through a series of conversations with some of the world’s leading architects and urbanists. From offices such as Snøhetta, BIG, NL Architects and Danish HLA, the production of formal div...

  13. Enhanced Processing of Vocal Melodies in Childhood

    Science.gov (United States)

    Weiss, Michael W.; Schellenberg, E. Glenn; Trehub, Sandra E.; Dawber, Emily J.

    2015-01-01

    Music cognition is typically studied with instrumental stimuli. Adults remember melodies better, however, when they are presented in a biologically significant timbre (i.e., the human voice) than in various instrumental timbres (Weiss, Trehub, & Schellenberg, 2012). We examined the impact of vocal timbre on children's processing of melodies.…

  14. US Army Biomedical Laboratory Annual Progress Report, Fiscal Year 1980

    Science.gov (United States)

    1980-10-01

    Methodological, neurochemical, and neuropsychological aspects. In B. Weiss & V.G. Laties (eds.) Behavioral Toxicology, New York: Plenum Press, 155-219 (1975...antagonizes an α-adrenergic "hunger" system in the rat. Nature. 226, 963-964 (1970). 130. Lipp, J.A. Effect of benzodiazepine derivatives on soman-induced

  15. Topic III - Infiltration and Drainage: A section in Joint US Geological Survey, US Nuclear Regulatory Commission workshop on research related to low-level radioactive waste disposal, May 4-6, 1993, National Center, Reston, Virginia; Proceedings (WRI 95-4015)

    Science.gov (United States)

    Prudic, David E.; Gee, Glendon; Stevens, Peter R.; Nicholson, Thomas J.

    1996-01-01

    Infiltration into and drainage from facilities for the disposal of low-level radioactive wastes are considered the major processes by which non-volatile contaminants are transported away from the facilities. The session included 10 papers related to the processes of infiltration and drainage, and to the simulation of flow and transport through the unsaturated zone. The first paper, presented by David Stonestrom, was an overview regarding the application of unsaturated flow theory to infiltration and drainage. Stonestrom posed three basic questions: How well do we know the relevant processes affecting flow and transport? How well can we measure the parametric functions used to quantify flow and transport? How do we treat complexities inherent in field settings? The other nine papers presented during the session gave some insight into these questions. Topics included: laboratory measurement of unsaturated hydraulic conductivities at low water contents, by John Nimmo; use of environmental tracers to identify preferential flow through fractured media and to quantify drainage, by Edmund Prych and Edwin Weeks; field experiments to evaluate relevant processes affecting infiltration and drainage, by Brian Andraski, Glendon Gee, and Peter Wierenga; and the use of deterministic and stochastic models for simulating flow and transport through heterogeneous sediments, by Richard Hills, Lynn Gelhar, and Shlomo Neuman.

  16. Reply to Comment on ‘Oxygen vacancy-induced magnetic moment in edge-sharing CuO2 chains of Li2CuO2-δ ’

    Science.gov (United States)

    Shu, G. J.; Tian, J. C.; Lin, C. K.; Hayashi, M.; Liou, S. C.; Chen, W. T.; Wong, Deniz P.; Liou, H. L.; Chou, F. C.

    2018-05-01

    In this reply to the comment on 'Oxygen vacancy-induced magnetic moment in edge-sharing CuO2 chains of Li2CuO2-δ' (2017 New Journal of Physics 19 023206), we have clarified several key questions and conflicting results regarding the size of the intra-chain nearest-neighbor coupling J1 and the sign of the Weiss temperature Θ defined in the Curie-Weiss law χ(T) = χ0 + C/(T - Θ). Additional data analysis is conducted to verify the validity of the Curie-Weiss law fitting protocol, including the negative sign and size of Θ based on the high-temperature linear temperature dependence of 1/χ(T) for T > J1 and gμ_B S H/(k_B T) ≪ 1. The consistency between the antiferromagnetic (AF) ground state below T_N and the negative sign of Θ in the high-temperature paramagnetic (PM) state is explained via the reduction of thermal fluctuation for a temperature-independent local field due to a magnetic interaction of quantum nature. A magnetic dipole-dipole (MDD)-type interaction among FM chains is identified and proposed to be necessary for the formation of the 3D AF magnetic ground state, i.e., the Heisenberg model with an exchange-type interaction alone is not sufficient to fully describe the quasi-1D spin chain system Li2CuO2. Several typical quasi-1D spin chain compounds, including Li2CuO2, CuAs2O4, Sr3Fe2O5, and CuGeO3, are compared to show why different magnetic ground states are achieved from the chemical-bond perspective.
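
    The fitting protocol in question, extracting the Curie constant C and the Weiss temperature Θ from the high-temperature behaviour of χ(T), can be sketched as an ordinary least-squares fit; the synthetic data and parameter values below are illustrative and not taken from the paper.

        # Minimal sketch of a Curie-Weiss fit: chi(T) = chi0 + C/(T - Theta),
        # fitted to high-temperature susceptibility data (synthetic here).
        import numpy as np
        from scipy.optimize import curve_fit

        def curie_weiss(T, chi0, C, Theta):
            return chi0 + C / (T - Theta)

        # Synthetic "measurement": chi0 = 1e-4, C = 0.45, Theta = -30 K, plus noise.
        T = np.linspace(150.0, 400.0, 60)
        chi = curie_weiss(T, 1e-4, 0.45, -30.0) * (1 + 0.01 * np.random.randn(T.size))

        popt, pcov = curve_fit(curie_weiss, T, chi, p0=(0.0, 1.0, 0.0))
        chi0_fit, C_fit, Theta_fit = popt
        print(f"C = {C_fit:.3f}, Theta = {Theta_fit:.1f} K")
        # A negative fitted Theta signals dominant antiferromagnetic correlations.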

  17. Analysis of adrenocortical tumours morphology as regards their structure and potential malignancy

    International Nuclear Information System (INIS)

    Kajor, M.; Ciupinska-Kajor, M.; Dobrosz, Z.; Ziaja, J.; Krol, R.; Heitzman, M.; Cierpka, L.

    2006-01-01

    Introduction: A consequence of diagnosis of adrenocortical carcinoma (ACC) is introduction of pharmacological therapy, precise monitoring of the patients and in some cases re-operation. The aim of the study is to analyse morphology of adrenocortical tumours as regards their malignancy by use of criteria proposed by Weiss. Material and methods: 110 adrenocortical tumours in 107 patients were analysed (M 27.1%, F 72.9%; age 32 to 77 years, mean 55.2 ± 9.7). Conn syndrome was diagnosed in 16 patients (14.9%), Cushing syndrome in 12 (11.2%), and virilisation in 3 (2.8%). In 76 patients (71.0%) biochemical tests did not reveal hormonal hyperactivity of the tumour. Results: In routine histopathological examination ACC was diagnosed in 6 tumours (5.4%), adrenocortical adenoma (ACA) in 92 (83.6%) and adrenocortical hyperplasia in 12 (10.9%). Nuclear grade III or IV was observed in 8 tumours (7.3%), mitotic rate > 5/50 high power fields in 6 (5.4%), atypical mitoses in 5 (4.5%), clear cells constituting < 25% of the tumour in 10 (9.1%), diffuse architecture in 8 (7.3%), necrosis in 16 (14.5%), veins infiltration in 4 (3.6%), sinusoids infiltration in 7 (6.3%), and tumour capsule infiltration in 5 (4.5%). Among ACC tumours 4 - 9 features of malignancy were present, among ACA - 0 - 3 features. Statistical analysis revealed correlation between number of criteria proposed by Weiss and maximal tumour size (p < 0.05). Conclusion: The structure and cell arrangement in adrenocortical adenoma are heterogeneous. Application of criteria proposed by Weiss in histopathological examination of adrenocortical tumours can be useful in differentiating adrenocortical adenoma from carcinoma. (author)

  18. Weiss oscillations and particle-hole symmetry at the half-filled Landau level

    Science.gov (United States)

    Cheung, Alfred K. C.; Raghu, S.; Mulligan, Michael

    2017-06-01

    Particle-hole symmetry in the lowest Landau level of the two-dimensional electron gas requires the electrical Hall conductivity to equal ±e^2/2h at half filling. We study the consequences of weakly broken particle-hole symmetry for magnetoresistance oscillations about half filling in the presence of an applied periodic one-dimensional electrostatic potential using the Dirac composite fermion theory proposed by Son [Phys. Rev. X 5, 031027 (2015), 10.1103/PhysRevX.5.031027]. At fixed electron density, the oscillation minima are asymmetrically biased towards higher magnetic fields, while at fixed magnetic field the oscillations occur symmetrically as the electron density is varied about half filling. We find an approximate "sum rule" obeyed for all pairs of oscillation minima that can be tested in experiment. The locations of the magnetoresistance oscillation minima for the composite fermion theory of Halperin, Lee, and Read (HLR) and its particle-hole conjugate agree exactly. Within the current experimental resolution, the locations of the oscillation minima produced by the Dirac composite fermion coincide with those of HLR. These results may indicate that all three composite fermion theories describe the same long-wavelength physics.

  19. SU-F-R-35: Repeatability of Texture Features in T1- and T2-Weighted MR Images

    International Nuclear Information System (INIS)

    Mahon, R; Weiss, E; Karki, K; Hugo, G; Ford, J

    2016-01-01

    in specific applications such as tissue classification and changes during radiation therapy utilizing a standard imaging protocol. Authors have the following disclosures: a research agreement with Philips Medical systems (Hugo, Weiss), a license agreement with Varian Medical Systems (Hugo, Weiss), research grants from the National Institute of Health (Hugo, Weiss), UpToDate royalties (Weiss), and none(Mahon, Ford, Karki). Authors have no potential conflicts of interest to disclose.

  20. SU-F-R-35: Repeatability of Texture Features in T1- and T2-Weighted MR Images

    Energy Technology Data Exchange (ETDEWEB)

    Mahon, R; Weiss, E; Karki, K; Hugo, G [Virginia Commonwealth University, Richmond, VA (United States); Ford, J [University of Miami Miller School of Medicine, Miami, FL (United States)

    2016-06-15

    in specific applications such as tissue classification and changes during radiation therapy utilizing a standard imaging protocol. Authors have the following disclosures: a research agreement with Philips Medical systems (Hugo, Weiss), a license agreement with Varian Medical Systems (Hugo, Weiss), research grants from the National Institute of Health (Hugo, Weiss), UpToDate royalties (Weiss), and none(Mahon, Ford, Karki). Authors have no potential conflicts of interest to disclose.

  1. Large scale visualization on the Cray XT3 using ParaView.

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, David; Geveci, Berk (Kitware, Inc.); Eschenbert, Kent (Pittsburgh Supercomputing Center); Neundorf, Alexander (Technical University of Kaiserslautern); Marion, Patrick (Kitware, Inc.); Moreland, Kenneth D.; Greenfield, John

    2008-05-01

    Post-processing and visualization are key components to understanding any simulation. Porting ParaView, a scalable visualization tool, to the Cray XT3 allows our analysts to leverage the same supercomputer they use for simulation to perform post-processing. Visualization tools traditionally rely on a variety of rendering, scripting, and networking resources; the challenge of running ParaView on the Lightweight Kernel is to provide and use the visualization and post-processing features in the absence of many OS resources. We have successfully accomplished this at Sandia National Laboratories and the Pittsburgh Supercomputing Center.

  2. Magnetic fusion energy and computers. The role of computing in magnetic fusion energy research and development (second edition)

    International Nuclear Information System (INIS)

    1983-01-01

    This report documents the structure and uses of the MFE Network and presents a compilation of future computing requirements. Its primary emphasis is on the role of supercomputers in fusion research. One of its key findings is that with the introduction of each successive class of supercomputer, qualitatively improved understanding of fusion processes has been gained. At the same time, even the current Class VI machines severely limit the attainable realism of computer models. Many important problems will require the introduction of Class VII or even larger machines before they can be successfully attacked

  3. Publisher Correction

    DEFF Research Database (Denmark)

    Bonàs-Guarch, Sílvia; Guindo-Martínez, Marta; Miguel-Escalada, Irene

    2018-01-01

    In the originally published version of this Article, the affiliation details for Santi González, Jian'an Luan and Claudia Langenberg were inadvertently omitted. Santi González should have been affiliated with 'Barcelona Supercomputing Center (BSC), Joint BSC-CRG-IRB Research Program in Computatio...

  4. Bioaerosol detection by aerosol TOF-mass spectrometry: Application of matrix assisted laser desorption/ionisation

    NARCIS (Netherlands)

    Wuijckhuijse, A.L. van; Stowers, M.A.; Kientz, Ch.E.; Marijnissen, J.C.M.; Scarlett, B.

    2000-01-01

    In previous publications the use of an aerosol time of flight mass spectrometer was reported for the on-line measurements of aerosols (Weiss 1997, Kievit 1995). The apparatus is capable of measuring the size as well as the chemical composition, by the use of Laser Desorption/Ionisation (LDI), of an

  5. Homology of the open moduli space of curves

    DEFF Research Database (Denmark)

    Madsen, Ib Henning

    2012-01-01

    This is a survey on the proof of a generalized version of the Mumford conjecture obtained in joint work with M. Weiss stating that a certain map between some classifying spaces which a priori have different natures induces an isomorphism at the level of integral homology. We also discuss our proo...

  6. SHORT COMMUNICATION Serological profiles of Herpes simplex ...

    African Journals Online (AJOL)

    Dr.Mirambo

    Journal of Infectious Diseases, 185, 45-52. Watson-Jones, D., Weiss, H.A., Rusizoka, M., Changalucha, J., Baisley, K., Mugeye, K., Tanton, C.,. Ross, D., Everett, D. & Clayton, T. (2008) Effect of herpes simplex suppression on incidence of HIV among women in Tanzania. New England Journal of Medicine 358: 1560-1571.

  7. Standard diffusive systems are well-posed linear systems

    NARCIS (Netherlands)

    Matignon, Denis; Zwart, Heiko J.

    2004-01-01

    The class of well-posed linear systems as introduced by Salamon has become a well-understood class of systems, see e.g. the work of Weiss and the book of Staffans. Many partial differential equations with boundary control and point observation can be formulated as a well-posed linear system.

  8. Granularity and textural analysis as a proxy for extreme wave events ...

    Indian Academy of Sciences (India)

    Tappin 2007). Several studies contributed to the confirmation of sedimentological signatures in regions like Kenya, southwest coast of India, west. Banda Aceh, Peru, Papua New Guinea, southern. Kuril trench by Bahlburg and Weiss (2007), Babu et al (2007), Morton et al (2007), Nanayama et al. (2007), Paris et al (2007).

  9. Simmi "Head käed" tuli, nägi ja võitis / Triin Tael

    Index Scriptorium Estoniae

    Tael, Triin

    2002-01-01

    Peeter Simm's Estonian-Latvian co-production "Good Hands", shown in the "Panorama Special" programme of the 52nd Berlin film festival, was a success with audiences and shared the Manfred Salzgeber Award, intended to fund subtitling for the film's distribution to European screens, with the Slovenian Maja Weiss's film "Guardian of the Frontier". P. Simm is pleased and is planning a new film

  10. Developing Individual and Team Character in Sport

    Science.gov (United States)

    Gaines, Stacey A.

    2012-01-01

    The idea that participation in sport builds character is a long-standing one. Advocates of sport participation believe that sport provides an appropriate context for the learning of social skills such as cooperation and the development of prosocial behavior (Weiss, Smith, & Stuntz, 2008). Research in sport regarding character development has…

  11. The Dramaturgy of Fact: The Testament of History in Two Anti-War Plays.

    Science.gov (United States)

    Shafer, George

    1978-01-01

    The dramaturgical dimensions of the "theater of fact" as found in two anti-war plays, "Discourse on Viet Nam" by Peter Weiss and "Xa: A Vietnam Primer" by the ProVisional Theatre are examined. In these plays the author finds that Vietnamese history becomes rhetorical testament in arguments against United States…

  12. Meeli Kõiva vitraažid ajakirja Stained Glass talvenumbris / Rein Eriksson

    Index Scriptorium Estoniae

    Eriksson, Rein

    1998-01-01

    On the interview with Meeli Kõiva in the winter issue of "Stained Glass", the journal of the American stained-glass artists' association. Eight photographs of the artist's stained-glass works were published. The author of the article "Dreams of Transparent Space" is Helene Weiss, former president of the association. In April 1997 M. Kõiva showed watercolours at a group exhibition in New York, at the Broadway gallery Art 54.

  13. Additivity for parametrized topological Euler characteristic and Reidemeister torsion

    OpenAIRE

    Badzioch, Bernard; Dorabiala, Wojciech

    2005-01-01

    Dwyer, Weiss, and Williams have recently defined the notions of parametrized topological Euler characteristic and parametrized topological Reidemeister torsion which are invariants of bundles of compact topological manifolds. We show that these invariants satisfy additivity formulas paralleling the additive properties of the classical Euler characteristic and Reidemeister torsion of finite CW-complexes.

  14. Thermal analysis of building roof assisted with water heater and ...

    Indian Academy of Sciences (India)

    D Prakash

    2018-03-14

    Mar 14, 2018 ... Thermal analysis; building roof; solar water heating system; roof ... These solar collec- ... several benefits, such as its wide range of storage temper- ... rugated plate, rear plate and back insulation material [12]. ..... [7] Weiss W and Rommel M 2008 Process heat collectors. State of the art within Task 33/IV.

  15. New wars, new morality?

    NARCIS (Netherlands)

    Akkerman, T.

    2009-01-01

    Has war fundamentally changed? If so, it may be time for reconsidering accepted moral standards for waging wars and for conduct in war. The new war thesis holds that wars have fundamentally altered since the end of the Cold War. Proponents such as Kaldor and Weiss hold that wars today are intrastate

  16. On the unboundedness of control operators for bilinear systems ...

    African Journals Online (AJOL)

    The aim of this work is to study the classes of unbounded linear control operators which ensure the existence and uniqueness of the mild and strong solutions of certain bilinear control systems. By an abstract approach, similar to that adopted by Weiss [18], we obtain a connection between these classes and those ...

  17. Relationships between Adult Workers' Spiritual Well-Being and Job Satisfaction: A Preliminary Study

    Science.gov (United States)

    Robert, Tracey E.; Young, J. Scott; Kelly, Virginia A.

    2006-01-01

    The authors studied the relationships between adult workers' spiritual well-being and job satisfaction. Two hundred participants completed 2 instruments: the Spiritual Well-Being Scale (C. W. Ellison & R. F. Paloutzian, 1982) and the Minnesota Satisfaction Questionnaire Short Form (D. J. Weiss, R. V. Dawis, G. W. England, & L. H. Lofquist,…

  18. Compulsory Project-Level Involvement and the Use of Program-Level Evaluations: Evaluating the Local Systemic Change for Teacher Enhancement Program

    Science.gov (United States)

    Johnson, Kelli; Weiss, Iris R.

    2011-01-01

    In 1995, the National Science Foundation (NSF) contracted with principal investigator Iris Weiss and an evaluation team at Horizon Research, Inc. (HRI) to conduct a national evaluation of the Local Systemic Change for Teacher Enhancement program (LSC). HRI conducted the core evaluation under a $6.25 million contract with NSF. This program…

  19. Small Business and the Public Library: Strategies for a Successful Partnership

    Science.gov (United States)

    Weiss, Luise; Serlis-McPhillips, Sophia; Malafi, Elizabeth

    2011-01-01

    Aligning with current difficult economic times, this book helps libraries assist users entering or already involved in the small business community. Authors Weiss, Serlis-McPhillips, and Malafi are public librarians who have incorporated small business services within their library. In their book they point the way to addressing the needs of job…

  20. Incipient ferroelectricity of water molecules confined to nano-channels of beryl

    Czech Academy of Sciences Publication Activity Database

    Gorshunov, B. P.; Torgashev, V. I.; Zhukova, E.S.; Thomas, V.G.; Belyanchikov, M. A.; Kadlec, Christelle; Kadlec, Filip; Savinov, Maxim; Ostapchuk, Tetyana; Petzelt, Jan; Prokleška, J.; Tomas, P. V.; Pestrjakov, E.V.; Fursenko, D.A.; Shakurov, G.S.; Prokhorov, A. S.; Gorelik, V. S.; Kadyrov, L.S.; Uskov, V.V.; Kremer, R. K.; Dressel, M.

    2016-01-01

    Roč. 7, Sep (2016), 1-10, č. článku 12842. ISSN 2041-1723 R&D Projects: GA ČR(CZ) GA14-25639S Institutional support: RVO:68378271 Keywords : water * beryl * ferroelectricity * quantum fluctuations * Curie–Weiss behaviour Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 12.124, year: 2016

  1. Image Understanding Workshop. Proceedings of a Workshop Held in San Diego, California on January 26-29, 1992

    Science.gov (United States)

    1992-01-01

    Giblin, P. and R. Weiss (1987). "Reconstruction, Detection, and 3D Representation of Potential Surfaces from Profiles," Proc. of ... networks, until recently the system has been far from ... The single original video image is shifted and rotated ... The Navlab has a hydraulic drive

  2. HIV knowledge, disclosure and sexual risk among pregnant women ...

    African Journals Online (AJOL)

    Molatelo Elisa Shikwane

    2014-01-03

    Jan 3, 2014 ... To cite this article: Molatelo Elisa Shikwane, Olga M. Villar-Loubet, Stephen M. Weiss, Karl Peltzer & Deborah L. Jones. (2013) HIV knowledge, disclosure and sexual risk among pregnant women and their partners in rural South Africa, SAHARA-. J: Journal of Social Aspects of HIV/AIDS: An Open Access ...

  3. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.
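
    The scaling pattern described here is, at heart, embarrassingly parallel: many independent generator instances run concurrently, each with its own random seed, and their outputs are merged afterwards. The Python sketch below illustrates only that generic pattern; generate_events is a hypothetical stand-in, not the Alpgen or Sherpa API.

      from multiprocessing import Pool
      import random

      def generate_events(task):
          """Hypothetical per-worker generator call with a rank-specific seed."""
          rank, n_events, seed = task
          rng = random.Random(seed)
          # placeholder for invoking a real event generator; here we just draw numbers
          return [rng.random() for _ in range(n_events)]

      if __name__ == "__main__":
          n_workers = 8
          tasks = [(rank, 1000, 12345 + rank) for rank in range(n_workers)]
          with Pool(n_workers) as pool:
              results = pool.map(generate_events, tasks)
          print(sum(len(r) for r in results), "events generated across", n_workers, "workers")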

  4. Application experiences with the Globus toolkit.

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, S.

    1998-06-09

    The Globus grid toolkit is a collection of software components designed to support the development of applications for high-performance distributed computing environments, or ''computational grids'' [14]. The Globus toolkit is an implementation of a ''bag of services'' architecture, which provides application and tool developers not with a monolithic system but rather with a set of stand-alone services. Each Globus component provides a basic service, such as authentication, resource allocation, information, communication, fault detection, and remote data access. Different applications and tools can combine these services in different ways to construct ''grid-enabled'' systems. The Globus toolkit has been used to construct the Globus Ubiquitous Supercomputing Testbed, or GUSTO: a large-scale testbed spanning 20 sites and including over 4000 compute nodes for a total compute power of over 2 TFLOPS. Over the past six months, we and others have used this testbed to conduct a variety of application experiments, including multi-user collaborative environments (tele-immersion), computational steering, distributed supercomputing, and high throughput computing. The goal of this paper is to review what has been learned from these experiments regarding the effectiveness of the toolkit approach. To this end, we describe two of the application experiments in detail, noting what worked well and what worked less well. The two applications are a distributed supercomputing application, SF-Express, in which multiple supercomputers are harnessed to perform large distributed interactive simulations; and a tele-immersion application, CAVERNsoft, in which the focus is on connecting multiple people to a distributed simulated world.

  5. Bystanders' Reactions to Witnessing Repetitive Abuse Experiences

    Science.gov (United States)

    Janson, Gregory R.; Carney, JoLynn V.; Hazler, Richard J.; Oh, Insoo

    2009-01-01

    The Impact of Event Scale-Revised (D. S. Weiss & C. R. Marmar, 1997) was used to obtain self-reported trauma levels from 587 young adults recalling childhood or adolescence experiences as witnesses to common forms of repetitive abuse defined as bullying. Mean participant scores were in a range suggesting potential need for clinical assessment…

  6. [Heinz von zur Mühlen. Auf den Spuren einiger revaler Firmen und Familien] / Paul Kaegbein

    Index Scriptorium Estoniae

    Kaegbein, Paul

    2007-01-01

    Review of: Heinz von zur Mühlen. Auf den Spuren einiger revaler Firmen und Familien. In: Buch und Bildung im Baltikum. Münster: LIT, 2005, pp. 527-541. On the owners of the houses on Pikk street from the 17th century onwards - the families Koch, Meyer, Kluge, Ströhm, Wassermann, Glehn, Eggers, Koppelson and Weiss. In this way the author builds up a picture of the "history of firms" of Tallinn

  7. Bibliography of Germfree Research 1885-1963. 1979 Supplement,

    Science.gov (United States)

    1979-01-01

    Turton, J. A. Eradication of the pinworm Syphacia obvelata from an animal unit by anthelmintic therapy. Lab. Anim. 13(2):115-118, 1979. 176. Patte, C. ... Bull. Exp. Biol. & Med. (Engl. tr.). 86(9):1217, 1978. (Rs) 182. Pollack, J. D., Weiss, H. S., and Somerson, N. L. Lecithin changes in murine myco

  8. Final Report on the Study of the Impact of the Statewide Systemic Initiatives. Lessons Learned about Designing, Implementing, and Evaluating Statewide Systemic Reform. WCER Working Paper No. 2003-12

    Science.gov (United States)

    Heck, Daniel J.; Weiss, Iris R.; Boyd, Sally E.; Howard, Michael N.; Supovitz, Jonathan A.

    2003-01-01

    This document represents the first of two volumes presented in "Study of the Impact of the Statewide Systemic Initiatives Program" (Norman L. Webb and Iris R. Weiss). In an effort to evaluate the impact of the Statewide Systemic Initiatives (SSIs) on student achievement and the lessons that could be learned from the National Science…

  9. Measuring Components of Intelligence: Mission Impossible?

    Science.gov (United States)

    Gregoire, Jacques

    2013-01-01

    The two studies conducted by Weiss, Keith, Zhu, and Chen in 2013 on the Wechsler Adult Intelligence Scale (WAIS-IV) and the Wechsler Intelligence Scale for Children (WISC-IV), respectively, provide strong evidence for the validity of a four-factor solution corresponding to the current hierarchical model of both scales. These analyses support the…

  10. Two-Bin Kanban: Ordering Impact at Navy Medical Center San Diego

    Science.gov (United States)

    2016-06-17

    Wiley. Weed, J. (2010, July 10). Factory efficiency comes to hospital. New York Times, 1–3. Weiss, N. (2008). Introductory statistics . San Francisco...Urology, and Oral Maxillofacial Surgery (OMFS) departments at NMCSD. The data is statistically significant in 2015 when compared to 2013. Procurement...31 3. C. Procurement Cost and Procurement Efficiency Statistics

  11. Disease: H01186 [KEGG MEDICUS

    Lifescience Database Archive (English)

    Full Text Available a disorder associated with an inherited selenocysteine (Sec) incorporation defect, caused by mutations in SECI...eiodinase type 2 (DIO2) enzymatic activity not linked to the DIO2 locus. Inherited metabolic disease SECISBP...C, Boran G, Schomburg L, Weiss RE, Refetoff S ... TITLE ... Mutations in SECISBP2 result in abnormal thyroid h

  12. Frame properties of wave packet systems in L^2 (R^d)

    DEFF Research Database (Denmark)

    Christensen, Ole; Rahimi, Asghar

    2008-01-01

    Extending work by Hernandez, Labate and Weiss, we present a sufficient condition for a generalized shift-invariant system to be a Bessel sequence or even a frame for L^2(R^d). In particular, this leads to a sufficient condition for a wave packet system to form a frame. On the other hand, we show...

  13. Flicien Kabuga, multimillionaire Rwandan businessman, Bosco ...

    African Journals Online (AJOL)

    gerhard

    BUTTERFLIES OF UGANDA: MEMORIES OF A CHILD SOLDIER. Darin Dahms and Sönke C. Weiss. Joh Brendow & Sohn Verlag GmbH. 2007. 140 pages. ISBN 3865062040, 9783865062048. "I was conceived in rape."1 At least for this reviewer, this is one of the most powerful, hard-hitting opening lines of any book he ...

  14. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  15. Effect of oxygen vacancies on magnetic and transport properties of Sr2IrO4

    Science.gov (United States)

    Dwivedi, Vinod Kumar; Mukhopadhyay, Soumik

    2018-05-01

    Iridates have recently attracted growing interest because of their potential for realizing various interesting phases, such as an interaction-driven Mott-type insulator and a magnetically driven Slater-type insulator. In this paper, we present the magnetic and electrical transport properties of polycrystalline Sr2IrO4 synthesized by a solid state reaction route. We find a ferromagnetic transition at 240 K. The Curie-Weiss law holds above the magnetic transition temperature TMag = 240 K, with a small effective paramagnetic moment μeff = 0.25 µB/f.u. and a Curie-Weiss temperature θCW = +100 K. The zero field cooled (ZFC) magnetization decreases gradually below 150 K, while the field cooled (FC) magnetization does so below 50 K. Interestingly, below about 10 K a sharp increase in both the ZFC and FC magnetization is seen. The temperature dependent resistivity reveals insulating behavior following a power-law mechanism. Sintering the sample in air leads to a very low resistivity value, which is likely related to Sr or oxygen vacancies.
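
    For reference, the Curie-Weiss analysis invoked in this and several other records below fits the high-temperature susceptibility to the standard form (the notation here is generic, not taken from any single cited paper):

      \chi(T) = \frac{C}{T - \theta_{CW}}, \qquad \mu_{\mathrm{eff}} \approx \sqrt{8C}\,\mu_B

    where C is the molar Curie constant in cgs units (emu K mol^-1 Oe^-1), \theta_{CW} is the Curie-Weiss temperature, and the sign of \theta_{CW} indicates predominantly ferromagnetic (positive) or antiferromagnetic (negative) exchange.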

  16. Ferromagnetic and paramagnetic magnetization of implanted GaN:Ho,Tb,Sm,Tm films

    Energy Technology Data Exchange (ETDEWEB)

    Maryško, M., E-mail: marysko@fzu.cz; Hejtmánek, J.; Laguta, V. [Institute of Physics of ASCR v.v.i., Cukrovarnická 10, 162 00 Prague 6 (Czech Republic); Sofer, Z.; Sedmidubský, D.; Šimek, P.; Veselý, M. [Department of Inorganic Chemistry, Institute of Chemical Technology, 166 28 Prague 6 (Czech Republic); Mikulics, M. [Peter Grünberg Institut, PGI-9, Forschung Centrum, Jülich D-52425 (Germany); JARA, Fundamentals of Future Information Technology, D52425 Jülich (Germany); Buchal, C. [Peter Grünberg Institut, PGI-9, Forschung Centrum, Jülich D-52425 (Germany); Macková, A.; Malínský, P. [Nuclear Physics Institute of the ASCR v.v.i., 250 68 Řež (Czech Republic); Department of Physics, Faculty of Science, J.E.Purkinje University, České mládeže, 400 96 Ústí nad Labem (Czech Republic); Wilhelm, R. A. [Helmholtz-Zentrum Dresden-Rossendorf, Institute of Ion Beam Physics and Materials Research, Dresden (Germany); Technische Universität Dresden, 01062 Dresden (Germany)

    2015-05-07

    The SQUID magnetic measurements were performed on the GaN films prepared by metal-organic vapour phase epitaxy and implanted by Tb3+, Tm3+, Sm3+, and Ho3+ ions. The sapphire substrate was checked by the electron paramagnetic resonance method which showed a content of Cr3+ and Fe3+ impurities. The samples 5 × 5 mm^2 were positioned in the classical straws and within an estimated accuracy of 10^-6 emu, no ferromagnetic moment was detected in the temperature region of 2–300 K. The paramagnetic magnetization was studied for parallel and perpendicular orientation. In the case of GaN:Tb sample, at T = 2 K, a pronounced anisotropy with the easy axis perpendicular to the film was observed which can be explained by the lowest quasi-doublet state of the non-Kramers Tb3+ ion. The Weiss temperature deduced from the susceptibility data using the Curie-Weiss (C-W) law was found to depend substantially on the magnetic field.

  17. High-capacity cation-exchange column for enhanced resolution of adjacent peaks of cations in ion chromatography.

    Science.gov (United States)

    Rey, M A

    2001-06-22

    One of the advantages of ion chromatography [Anal Chem. 47 (1975) 1801] as compared to other analytical techniques is that several ions may be analyzed simultaneously. One of the most important contributions of cation-exchange chromatography is its sensitivity to ammonium ion, which is difficult to analyze by other techniques [J. Weiss, in: E.L. Johnson (Ed.), Handbook of Ion Chromatography, Dionex, Sunnyvale, CA, USA]. The determination of low concentrations of ammonium ion in the presence of high concentrations of sodium poses a challenge in cation-exchange chromatography [J. Weiss, Ion Chromatography, VCH, 2nd Edition, Weinheim, 1995], as both cations have similar selectivities for the common stationary phases containing either sulfonate or carboxylate functional groups. The task was to develop a new cation-exchange stationary phase (for diverse concentration ratios of adjacent peaks) to overcome limitations experienced in previous trials. Various cation-exchange capacities and column body formats were investigated to optimize this application and others. The advantages and disadvantages of two carboxylic acid columns of different cation-exchange capacities and different column formats will be discussed.

  18. Manifestations of Kitaev physics in thermodynamic properties of hexagonal iridates and α-RuCl3

    Science.gov (United States)

    Tsirlin, Alexander

    Kitaev model is hard to achieve in real materials. Best candidates available so far are hexagonal iridates M2IrO3 (M = Li and Na) and the recently discovered α-RuCl3 featuring hexagonal layers coupled by weak van der Waals bonding. I will review recent progress in crystal growth of these materials and compare their thermodynamic properties. Both hexagonal iridates and α-RuCl3 feature highly anisotropic Curie-Weiss temperatures that not only differ in magnitude but also change sign depending on the direction of the applied magnetic field. Néel temperatures are largely suppressed compared to the energy scale of the Curie-Weiss temperatures. These experimental observations will be linked to features of the electronic structure and to structural peculiarities associated with deviations from the ideal hexagonal symmetry. I will also discuss how the different nature of ligand atoms affects electronic structure and magnetic superexchange. This work has been done in collaboration with M. Majumder, M. Schmidt, M. Baenitz, F. Freund, and P. Gegenwart.
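
    For orientation, the Kitaev honeycomb model referred to above couples spins through bond-dependent Ising interactions; in a common convention (not quoted from this abstract) it reads

      H = -\sum_{\langle ij \rangle_\gamma} K_\gamma \, S_i^\gamma S_j^\gamma, \qquad \gamma \in \{x, y, z\},

    where each of the three bond directions of the honeycomb lattice carries a different spin component, which is what frustrates conventional magnetic order.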

  19. Ein statistisches Modell zum Einfluß der thermischen Bewegung auf NMR-Festkörperspektren

    Science.gov (United States)

    Ploss, W.; Freude, D.; Pfeifer, H.; Schmiedel, H.

    Es wird ein statistisches Modell zum Einfluß der thermischen Bewegung auf die NMR-Linienform vorgestellt, das die Verschmälerung von Festkörper-Spektren bei wachsender Temperatur beschreibt. Das Modell geht von der Annahme aus, daß nach einer Ortsveränderung eines Kerns infolge thermischer Bewegung jede beliebige Kernresonanzfrequenz mit der durch das Festkörperspektrum vorgegebenen Wahrscheinlichkeit angenommen werden kann. Am Beispiel der Festkörper-Gaußlinie wird der Unterschied zu dem bekannten Modell von ANDERSON und WEISS verdeutlicht.

    Translated Abstract: A Statistical Model for the Influence of Thermal Motion on N.M.R. Spectra in Solids. A theory is proposed which allows to describe the narrowing of n.m.r.-line width in the presence of thermal motions of the spins. The model is based on the assumption that the local resonance frequency of a given spin immediately after the jump is distributed according to the n.m.r.-line shape of the rigid lattice. The difference to the well-known ANDERSON-WEISS model of spectral narrowing is demonstrated for a gaussian line shape.

  20. Microstructure evolution and phase transition in La/Mn doped barium titanate ceramics

    Directory of Open Access Journals (Sweden)

    Vesna Paunović

    2010-12-01

    Full Text Available La/Mn codoped BaTiO3 samples with different La2O3 content, ranging from 0.1 to 5.0 at% La, were investigated regarding their microstructural and dielectric characteristics. The content of 0.05 at% Mn was constant in all investigated samples. The samples were sintered at 1320°C and 1350°C for two hours. Microstructural studies were done using SEM and EDS analysis. A fine-grained microstructure was obtained even for low content of La. The appearance of secondary abnormal grains with serrated features along grain boundaries was observed in 1.0 at% La-BaTiO3 sintered at 1350°C. A nearly flat permittivity-temperature response was obtained in specimens with 2.0 and 5.0 at% La. Using the modified Curie-Weiss law, a critical exponent γ and the constant C' were calculated. The obtained values of γ pointed to a diffuse phase transformation in heavily doped BaTiO3 and a large departure from the Curie-Weiss law for low-doped ceramics.
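
    The modified Curie-Weiss law used for such diffuse transitions is commonly written as (standard form, not quoted from this paper)

      \frac{1}{\varepsilon} - \frac{1}{\varepsilon_m} = \frac{(T - T_m)^{\gamma}}{C'}, \qquad 1 \le \gamma \le 2,

    where \varepsilon_m is the maximum permittivity reached at temperature T_m; \gamma = 1 recovers conventional Curie-Weiss behaviour and \gamma = 2 corresponds to a fully diffuse phase transition.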

  1. Feed value of ensiled pig excreta, poultry litter or urea with ...

    African Journals Online (AJOL)

    Dnkosi

    2014-05-17

    May 17, 2014 ... The N retention and total tract digestion were similar for all treatments. ... Water was added to the material to a level of 400 g/kg DM. The control silage was .... Thus, the cumulative pressure and volume of the fermentation gases ..... effect on cell wall digestibility is the difference in DMI (Weiss & Wyatt, 2004).

  2. Stress Optical Coefficient, Test Methodology, and Glass Standard Evaluation

    Science.gov (United States)

    2016-05-01

    ARL-TN-0756, May 2016, US Army Research Laboratory. Stress Optical Coefficient, Test Methodology, and Glass Standard Evaluation, by Clayton M Weiss, Oak Ridge Institute for Science and Education (ORISE), Belcamp, MD, and Parimal J Patel, Weapons and Materials Research Directorate, ARL. Approved for public release; distribution is

  3. Assessment of the Acute Psychiatric Patient in the Emergency Department: Legal Cases and Caveats

    Science.gov (United States)

    2014-05-01

    paranoid behavior. Notable in his evaluation were a blood alcohol level (BAL) of 0.203% and a positive urine screen for marijuana . The EP and social worker...1994; 24(4):672–677. 7. Dubin WR, Weiss KJ, Zeccardi JA. Organic brain syndrome: The psychiatric imposter. JAMA. 1993;249(1):60–62. 8. Tintinalli JE

  4. Fulltext PDF

    Indian Academy of Sciences (India)

    Shivashankar, India, and P S Weiss, USA. Motors and Macromolecular Assemblies: H C Berg, J Kuriyan, J A Spudich, R D Vale, USA, and T L Blundell, UK. Networks: U Alon, Israel, M Elowitz and E D Siggia, USA. Membrane Proteins Structure, Function and Folding: P Booth, UK, F Bezanilla, D M Engelman, G P Hess, E ...

  5. gauge fields

    Indian Academy of Sciences (India)

    Here it has been found that the real form of Yang's self-dual equations [2,3] passes the Painlevé test for integrability in the sense of Weiss et al [4] and admits truncation of the series, leading to non-trivial exact solutions obtained previously and an auto-Bäcklund transformation between two pairs of these solutions (see for example.

  6. Introductory lectures on critical phenomena

    International Nuclear Information System (INIS)

    Khajehpour, M.R.H.

    1988-09-01

    After a presentation of classical models for phase transitions and critical phenomena (Van der Waals theory, Weiss theory of ferromagnetism) and theoretical models (Ising model, XY model, Heisenberg model, spherical model) the Landau theory of critical and multicritical points and some single applications of renormalization group method in static critical phenomena are presented. 115 refs, figs and tabs

  7. The Flynn Effect and Its Critics: Rusty Linchpins and "Lookin' for g and Gf in Some of the Wrong Places"

    Science.gov (United States)

    McGrew, Kevin S.

    2010-01-01

    The consensus of most intelligence scholars is that the Flynn effect (FE) is real, and IQ test batteries are now restandardized on a regular basis. A cornerstone in Flynn's explanation of the FE is his analysis of select Wechsler subtest scores across time. The featured articles by Kaufman and by Zhou, Zhu, and Weiss question whether Flynn's…

  8. Case report

    African Journals Online (AJOL)

    raoul

    21 sept. 2011 ... Pityriasis rubra pilaris and HIV infection. J Am Acad Dermatol. 1991; 24:703-705. This article on PubMed. 4. Sharma S, Weiss GR, Paulger B. Pityriasis rubra pilaris as an initial presentation of hepatocellular carcinoma. Dermatology. 1997; 194. (2):166-7. This article on PubMed. 5. Polat M, Lenk N, Ustun H, ...

  9. SU-F-J-67: Dosimetric Changes During Radiotherapy in Lung Cancer Patients with Atelectasis

    Energy Technology Data Exchange (ETDEWEB)

    Guy, C; Weiss, E; Jan, N; Reshko, L; Hugo, G [Virginia Commonwealth University, Richmond, VA (United States); Christensen, G [University of Iowa, Iowa City, IA (United States)

    2016-06-15

    Medical systems (Hugo, Weiss), National Institutes of Health (Hugo, Weiss, Christensen), and Roger Koch (Christensen) support, UpToDate (Weiss) royalties, and Varian Medical Systems (Hugo, Weiss) license. No potential conflicts of interest.

  10. Status of the Fermilab lattice supercomputer project

    International Nuclear Information System (INIS)

    Mackenzie, P.; Eichten, E.; Hockney, G.

    1988-10-01

    Fermilab has completed construction of a sixteen node (320 megaflop peak speed) parallel computer for lattice gauge theory calculations. The architecture was designed to provide the highest possible cost effectiveness while maintaining a high level of programmability and constraining as little as possible the types of lattice problems which can be done on it. The machine is programmed in C. It is a prototype for a 256 node (5 gigaflop peak speed) computer which will be assembled this winter. 6 refs

  11. Experimental HEP supercomputing at F.S.U

    International Nuclear Information System (INIS)

    Levinthal, D.; Goldman, H.; Hodous, M.F.

    1987-01-01

    We have developed a track reconstruction algorithm that will work with any 2-dimensional detector as long as the 2-dimensional projections of that detector are ''true'' projections. The program is implemented on a 2-pipe CDC CYBER-205 with 4 million words of memory. (orig./HSI)

  12. Data Mining Supercomputing with SAS JMP® Genomics

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2011-02-01

    Full Text Available JMP® Genomics is statistical discovery software that can uncover meaningful patterns in high-throughput genomics and proteomics data. JMP® Genomics is designed for biologists, biostatisticians, statistical geneticists, and those engaged in analyzing the vast stores of data that are common in genomic research (SAS, 2009). Data mining was performed using JMP® Genomics on the two collections of microarray databases available from the National Center for Biotechnology Information (NCBI) for lung cancer and breast cancer. The Gene Expression Omnibus (GEO) of NCBI serves as a public repository for a wide range of high-throughput experimental data, including the two collections of lung cancer and breast cancer that were used for this research. The results of applying data mining using the JMP® Genomics software are shown in this paper with numerous screen shots.

  13. Performance Analysis of FEM Algorithmson GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh

    2015-04-27

    The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption cost of future CPU-only Exascale systems will be unsustainable, thus accelerators such as graphics processing units (GPUs) and many-integrated-core (MIC) processors will likely be an integral part of the TOP500 (http://www.top500.org/) supercomputers beyond 2020. The emerging supercomputer architecture will bring new challenges for code developers. Continuum mechanics codes will particularly be affected, because the traditional synchronous implicit solvers will probably not scale on hybrid Exascale machines. In the previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on a comparative study of finite element codes, using PETSc and AmgX solvers on CPU and GPUs, respectively [3,4]. We believe this study will be a good starting point for FEM code developers who are contemplating a CPU-to-accelerator transition.
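
    As background for the conjugate gradient based solver mentioned above, the following is a minimal NumPy sketch of the unpreconditioned conjugate gradient iteration for a symmetric positive-definite system; it is illustrative only and is not the benchmarked PETSc/AmgX implementation.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          """Solve A x = b for symmetric positive-definite A (illustrative sketch)."""
          x = np.zeros_like(b)
          r = b - A @ x          # initial residual
          p = r.copy()           # initial search direction
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # small self-contained check on a random SPD system
      n = 100
      M = np.random.rand(n, n)
      A = M @ M.T + n * np.eye(n)   # SPD by construction
      b = np.random.rand(n)
      x = conjugate_gradient(A, b)
      print("residual norm:", np.linalg.norm(A @ x - b))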

  14. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    Science.gov (United States)

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), that is a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective to stabilize the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. The pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM is available on our web site. tamada@ims.u-tokyo.ac.jp.
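
    For orientation, the state space model estimated by such software is typically of the linear Gaussian form below; the notation is generic and is not taken from the SiGN-SSM documentation.

      x_t = F x_{t-1} + w_t, \qquad w_t \sim \mathcal{N}(0, Q)
      y_t = H x_t + v_t, \qquad v_t \sim \mathcal{N}(0, R)

    where y_t is the observed expression profile at time t, x_t is a lower-dimensional hidden state, and the parameters F, H, Q and R are estimated from the time series, typically with an EM-type algorithm.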

  15. KNBD: A Remote Kernel Block Server for Linux

    Science.gov (United States)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC Clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  16. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  17. Scientific visualization and radiology

    International Nuclear Information System (INIS)

    Lawrance, D.P.; Hoyer, C.E.; Wrestler, F.A.; Kuhn, M.J.; Moore, W.D.; Anderson, D.R.

    1989-01-01

    Scientific visualization is the visual presentation of numerical data. The National Center for Supercomputing Applications (NCSA) has developed methods for visualizing computer-based simulations of digital imaging data. The applicability of these various tools for unique and potentially medically beneficial display of MR images is investigated. Raw data are obtained from MR images of the brain, neck, spine, and brachial plexus obtained on a 1.5-T imager with multiple pulse sequences. A supercomputer and other mainframe resources run a variety of graphic and imaging programs using these data. An interdisciplinary team of imaging scientists, computer graphics programmers, and physicians works together to achieve useful information

  18. How General-Purpose can a GPU be?

    Directory of Open Access Journals (Sweden)

    Philip Machanick

    2015-12-01

    Full Text Available The use of graphics processing units (GPUs) in general-purpose computation (GPGPU) is a growing field. GPU instruction sets, while implementing a graphics pipeline, draw from a range of single instruction multiple data stream (SIMD) architectures characteristic of the heyday of supercomputers. Yet only one of these SIMD instruction sets has proved applicable to a wide enough range of problems to survive the era when the full range of supercomputer design variants was being explored: vector instructions. This paper proposes a reconceptualization of the GPU as a multicore design with minimal exotic modes of parallelism so as to make GPGPU truly general.
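
    As a loose illustration of the vector-style data parallelism discussed above, the Python sketch below contrasts an explicit scalar loop with a whole-array (vectorized) operation that a library can map onto SIMD or vector hardware; it is an analogy only, not GPU code.

      import numpy as np
      import time

      n = 1_000_000
      a = np.random.rand(n)
      b = np.random.rand(n)

      # scalar loop: one element at a time, as a scalar instruction stream would do
      t0 = time.perf_counter()
      c_loop = np.empty(n)
      for i in range(n):
          c_loop[i] = a[i] * b[i] + a[i]
      t_loop = time.perf_counter() - t0

      # vectorized: one whole-array expression, which the library can map to vector units
      t0 = time.perf_counter()
      c_vec = a * b + a
      t_vec = time.perf_counter() - t0

      print(f"loop: {t_loop:.3f} s, vectorized: {t_vec:.3f} s, results match: {np.allclose(c_loop, c_vec)}")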

  19. Conjugacy, orbit equivalence and classification of measure-preserving group actions

    DEFF Research Database (Denmark)

    Törnquist, Asger Dag

    2009-01-01

    We prove that if G is a countable discrete group with property (T) over an infinite subgroup H ≤ G which contains an infinite Abelian subgroup or is normal, then G has continuum-many orbit-inequivalent measure-preserving almost-everywhere-free ergodic actions on a standard Borel probability space ... and Weiss for conjugacy of measure-preserving ergodic almost-everywhere-free actions of discrete countable groups.

  20. To the complete integrability of long-wave short-wave interaction equations

    International Nuclear Information System (INIS)

    Roy Chowdhury, A.; Chanda, P.K.

    1984-10-01

    We show that the non-linear partial differential equations governing the interaction of long and short waves are completely integrable. The methodology we use is that of Ablowitz et al. though in the last section of our paper we have discussed the problem also in the light of the procedure due to Weiss et al. and have obtained a Baecklund transformation. (author)

  1. TSC Regulates Oligodendroglial Differentiation and Myelination in the CNS

    Science.gov (United States)

    2011-09-01

    Glutamate-Mediated Apoptosis and Trophic Factor Protection of Immature Oligodendrocytes” Weiss Center for Research, “Death and Survival in the...System, Cold Spring Harbor Laboratories 1994 - 1995 Medical Embryology , PSU/College of Medicine (4 lectures) 1994 – 2000 Molecular...Instructor, Medical Embryology , PSU/College of Medicine (8 lectures) 1997 Biology of Neoplasia (1 lecture) 1999 - 2004 Medical Histology, PSU/College

  2. Commutators of integral operators with variable kernels on Hardy ...

    Indian Academy of Sciences (India)

    [8] Coifman R R, Lions P L, Meyer Y and Semmes S, Compensated compactness and Hardy spaces, J. Math. Pure Appl. 72(3) (1993) 247–286. [9] Coifman R R, Rochberg R and Weiss G, Factorization theorems for Hardy spaces in several variable, Ann. Math. 103 (1976) 611–635. [10] Ding Y, Weak type bounds for a class ...

  3. The effect of Mn substitution on the structure and magnetic properties of Se(Cu1-xMnx)O3 solid solution

    CERN Document Server

    Escamilla, R; Rosales, M I; Moran, E C; Alario-Franco, M A

    2003-01-01

    The effects of Mn substitution on the structure and magnetic properties of the SeMO3 (M = Cu1-xMnx) solid solution have been studied. Rietveld refinements of the x-ray diffraction patterns of these samples indicate that the manganese ions occupy copper sites. This replacement induces significant changes in the M-O bond lengths that give rise to abrupt decreases of the [M-O6] octahedral distortion. In contrast, the M-O(1)-M and M-O(2)-M bond angles remain essentially constant. The magnetic behaviour of this solid solution was studied in the temperature range of 2 K < T < 300 K. The temperature dependence of the inverse magnetic susceptibility is well described by the Curie-Weiss law at high temperatures, in the composition range studied. We found that the substitution of Mn for Cu induces a sharp drop in the saturation moment of SeCuO3. At about 10% of Mn there is a change from positive to negative Weiss constant θW that is mainly due to the [M-O6] octahe...

  4. Magnetic susceptibilities of liquid Cr-Au, Mn-Au and Fe-Au alloys

    Energy Technology Data Exchange (ETDEWEB)

    Ohno, S.; Shimakura, H. [Niigata University of Pharmacy and Applied Life Sciences, Higashijima, Akiha-ku, Niigata 956-8603 (Japan); Tahara, S. [Faculty of Science, University of the Ryukyus, Nishihara-cho, Okinawa 903-0213 (Japan); Okada, T. [Niigata College of Technology, Kamishin’eicho, Nishi-ku, Niigata 950-2076 (Japan)

    2015-08-17

    The magnetic susceptibility of liquid Cr-Au, Mn-Au, Fe-Au and Cu-Au alloys was investigated as a function of temperature and composition. Liquid Cr1-cAuc with 0.5 ≤ c and Mn1-cAuc with 0.3 ≤ c obeyed the Curie-Weiss law with regard to the dependence of χ on temperature. The magnetic susceptibilities of liquid Fe-Au alloys also exhibited Curie-Weiss behavior with a reasonable value for the effective number of Bohr magnetons. On the Au-rich side, the composition dependence of χ for liquid TM-Au (TM=Cr, Mn, Fe) alloys increased rapidly with increasing TM content. Additionally, the composition dependences of χ for liquid Cr-Au, Mn-Au, and Fe-Au alloys had maxima at compositions of 50 at% Cr, 70 at% Mn, and 85 at% Fe, respectively. We compared the composition dependences of χ3d due to 3d electrons for liquid binary TM-M (M=Au, Al, Si, Sb), and investigated the relationship between χ3d and EF in liquid binary TM-M alloys at a composition of 50 at% TM.

  5. Influence of the symptoms of Attention Deficit Hyperactivity Disorder (ADHD) and comorbid disorders on functioning in adulthood.

    Science.gov (United States)

    Miranda, Ana; Berenguer, Carmen; Colomer, Carla; Roselló, Rocío

    2014-01-01

    ADHD is a chronic disorder that generally has a negative effect on socio-personal adaptation. The objectives of the current study were to examine the adaptive functioning in the daily lives of adults with ADHD compared to adults without the disorder and to test the influence of ADHD symptoms and comorbid problems on different areas of adaptive functioning. Seventy-seven adults between 17 and 24 years old, 40 with a clinical diagnosis of combined-subtype ADHD in childhood and 37 controls, filled out the Weiss Functional Impairment Scale, the Weiss Symptom Record and Conners' Adult ADHD Rating Scale. Significant differences were found between adults with and without ADHD in family and academic functioning. Moreover, the ADHD symptomatology as a whole predicted significant deficiencies in the family environment and self-concept, whereas inattention specifically predicted worse academic performance and life skills. The comorbidities mainly affected the family and risky activity domains (dangerous driving, illegal behaviors, substance misuse and sexually inappropriate behaviors). The results illustrate the importance of developing a multimodal approach to helping ADHD adults cope with associated comorbid disorders, offering them supportive coaching in organizing daily activities, and incorporating the family and/or partner in the treatment plan.

  6. NIC symposium 2012. 25 years HLRZ/NIC. Proceedings

    International Nuclear Information System (INIS)

    Binder, Kurt

    2012-01-01

    For 25 years the John von Neumann Institute for Computing (NIC), the former ''Hoechstleistungsrechenzentrum'', has played a pioneering role in supporting research in computational science at the forefront, by giving large grants of computer time to carefully selected research projects. The scope of these projects ranges from fundamental aspects of physics, such as the physics of elementary particles and nuclear physics, astrophysics, statistical physics and physics of condensed matter, computational chemistry and life sciences, to more applied areas of research, such as the modelling of processes in the atmosphere, materials science, fluid dynamics applications in engineering, etc. These projects make use of the supercomputer resources that the Juelich Supercomputing Centre (JSC) provides. The present book, which appears in the framework of the biannual NIC Symposia series, continues a tradition started 10 years ago of presenting selected highlights of this research to a broader audience. Due to space restrictions, only a small number of the research projects that are carried out at the NIC can be presented in this way. Projects that stand out as particularly excellent are nominated as ''John von Neumann Excellence Project'' by the review board. In 2010 this award was given to A. Muramatsu (Stuttgart) for his project on ''Quantum Monte Carlo studies of strongly correlated systems''. In 2011, two such awards were given to C. Hoelbling (Wuppertal) for his project ''Computing B_K with 2+1 flavours at the physical mass point'', and another one to W. Paul (Halle) for ''Long range correlations at polymer-solid interfaces''. The procedures adopted by the NIC to identify the scientifically best projects for the allocation of computer time are of the same character as those used by organisations founded more recently, such as (in Germany) the Gauss Centre for Supercomputing (GCS), an alliance of the three German national supercomputing centres in Juelich, Garching and

  9. 2016 ALCF Science Highlights

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Wolf, Laura [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  10. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  11. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  12. Ab initio molecular dynamics simulations for the role of hydrogen in catalytic reactions of furfural on Pd(111)

    Science.gov (United States)

    Xue, Wenhua; Dang, Hongli; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2014-03-01

    In the study of catalytic reactions of biomass, furfural conversion over metal catalysts with the presence of hydrogen has attracted wide attention. We report ab initio molecular dynamics simulations for furfural and hydrogen on the Pd(111) surface at finite temperatures. The simulations demonstrate that the presence of hydrogen is important in promoting furfural conversion. In particular, hydrogen molecules dissociate rapidly on the Pd(111) surface. As a result of such dissociation, atomic hydrogen participates in the reactions with furfural. The simulations also provide detailed information about the possible reactions of hydrogen with furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.

  13. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster, as compared to the e1350 and Sun supercomputers.
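
    For reference, the strong- and weak-scaling measures used in such studies are conventionally defined as follows (generic notation, not taken from the paper):

      S(p) = \frac{T(1)}{T(p)}, \qquad E_{\mathrm{strong}}(p) = \frac{S(p)}{p}, \qquad E_{\mathrm{weak}}(p) = \frac{T(1)}{T(p)} \ \text{(work per process held fixed)},

    where T(p) is the wall-clock time on p processes; in strong scaling the total problem size is fixed while p grows, whereas in weak scaling the problem size grows in proportion to p.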

  14. The Cure for HPC Neurosis: Multiple, Virtual Personalities!

    Energy Technology Data Exchange (ETDEWEB)

    Farber, Rob

    2007-06-30

    The selection of a new supercomputer for a scientific data center represents an interesting neurotic condition stemming from the conflict between a compulsion to acquire the best of the latest generation computer hardware, and unresolved issues as users seek validation from legacy scientific software - sometimes euphemistically called “research quality code”. Virtualization technology, now a mainstream feature on modern processors, permits multiple operating systems to efficiently and simultaneously run on each node of a supercomputer (or even your laptop and workstation). The benefits of this technology are many, ranging from supporting legacy software to paving the way towards robust petascale (10^15 floating point operations per second) and eventually exascale (10^18 floating point operations per second) computing.

  15. 1984 CERN school of computing

    International Nuclear Information System (INIS)

    1985-01-01

    The eighth CERN School of Computing covered subjects mainly related to computing for elementary-particle physics. These proceedings contain written versions of most of the lectures delivered at the School. Notes on the following topics are included: trigger and data-acquisition plans for the LEP experiments; unfolding methods in high-energy physics experiments; Monte Carlo techniques; relational data bases; data networks and open systems; the Newcastle connection; portable operating systems; expert systems; microprocessors - from basic chips to complete systems; algorithms for parallel computers; trends in supercomputers and computational physics; supercomputing and related national projects in Japan; application of VLSI in high-energy physics, and single-user systems. See hints under the relevant topics. (orig./HSI)

  16. Berlinale Kuldkaru jagunes kahe filmi vahel / Andres Laasik

    Index Scriptorium Estoniae

    Laasik, Andres, 1960-2016

    2002-01-01

    On the results of the 52nd Berlin film festival. Peeter Simm's "Head käed", screened in the festival's "Panorama Special" programme, was a success with audiences and shared the Manfred Salzgeber Award with the Slovenian Maja Weiss's film "Piirivalvur"; the award is intended to pay for subtitling when the films are distributed to European screens. The Golden Bear was shared by the Briton Paul Greengrass's "Bloody Sunday" and the Japanese Hayao Miyazaki's animated film "Spirited Away". Honorary Golden Bears for lifetime achievement went to Claudia Cardinale and Robert Altman

  17. USAF/SCEEE Graduate Student Summer Research Program (1984). Program Management Report. Volume 1.

    Science.gov (United States)

    1984-10-01

    Force Spouse Survey -: 76 Median Filter Enhancement Kevin J. Verfaille for Computer Recognition 77 Raman Spectroscopy Studies Michael Wager of...oxidized derivitives,the prostaglandins, thromboxanes and related compounds. 3. Determine the effects of neurotransuitters, analogues and inhibitors...adult pig cerebellum and adult pig whole brain cortex. Lipids 9: 756-764.A 13. Kennedy, E.P. and Weiss, S.B. (1956) The function of cytidine coen

  18. Balancing Scientific Publication and National Security Concerns: Issues for Congress

    Science.gov (United States)

    2003-07-09

    10 Rick Weiss, “Polio-Causing Virus Created in N.Y. Lab: Made-From-Scratch Pathogen Prompts Concerns About Bioethics , Terrorism,” The Washington...Human Services or with the Department of Agriculture , depending on the nature of the select agent. Most universities generally reconcile their dual roles...economic, human, financial, industrial, agricultural , technological, and law enforcement information, as well as the privacy or confidentiality of

  19. Mean-field approximation minimizes relative entropy

    International Nuclear Information System (INIS)

    Bilbro, G.L.; Snyder, W.E.; Mann, R.C.

    1991-01-01

    The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach
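
    For context, the variational statement behind the Weiss mean-field approximation can be written in generic notation (not that of the paper) as the Peierls (Gibbs-Bogoliubov) bound on the free energy, which is equivalent to minimizing a relative entropy over factorized trial distributions:

      F \le F_0 + \langle H - H_0 \rangle_0, \qquad D(\rho_0 \,\|\, \rho) = \mathrm{Tr}\left[\rho_0 (\ln \rho_0 - \ln \rho)\right] \ge 0.

    For the nearest-neighbour Ising ferromagnet this minimization yields the familiar self-consistency condition m = \tanh(\beta (J z m + h)), with z the coordination number.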

  20. Laboratory open-quotes proof of principleclose quotes investigation for the acoustically enhanced remediation technology

    International Nuclear Information System (INIS)

    Iovenitti, J.L.; Spencer, J.W.; Hill, D.G.

    1995-01-01

    This document describes a three phase program of Weiss Associates which investigates the systematics of using acoustic excitation fields (AEFs) to enhance the in-situ remediation of contaminated soil and ground water under both saturated and unsaturated conditions. The focus in this particular paper is a laboratory proof of principle investigation. The field deployment and engineering viability of acoustically enhanced remediation technology is also examined

  1. The Global Economic Crisis: Impact on Sub-Saharan Africa and Global Policy Responses

    Science.gov (United States)

    2010-04-06

    financial assistance to Africa is provided through the IMF’s concessional lending facilities, the Poverty Reduction and Growth Facility ( PRGF ) and...Notes: Amounts are the total amount of outstanding PRGF and ESF loans to African countries at the end of April for each year...Report RS22534, The Multilateral Debt Relief Initiative, by Martin A. Weiss. 107 PRGF loans are intended to help low-income countries address

  2. Naval Law Review, Volume 54, 2007

    Science.gov (United States)

    2007-01-01

    discharged soldier wrote numerous public officials a secondhand account of the events. In November 1969, the Army initiated an investigation. In...later book detail the killings of Vietnamese civilians (elderly men, women, and children) by an elite Army unit. The series discusses the events...Major Hawkins’ alleged murders and recommended an Article 32 investigation. As of the May 2006 publication of the book, Sallah and Weiss indicate

  3. Magnetic susceptibility of Gd₃Ga₂

    Energy Technology Data Exchange (ETDEWEB)

    Hacker, H Jr; Gupta, R M [Duke Univ., Durham, N.C. (USA). Dept. of Electrical Engineering]

    1976-03-01

    The magnetic susceptibility of the intermetallic compound Gd₃Ga₂ has been measured by the Faraday method over the range 8-300 K. The data indicate antiferromagnetic behavior below 53 K. Above 100 K, the mass susceptibility obeys the Curie-Weiss law, χ_g = 4.45×10⁻²/(T + 23) emu/(g·Oe). The corresponding effective moment is 8.51 Bohr magnetons.
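
    The quoted moment can be checked against the Curie constant using the standard CGS relation C_mol = N_A μ_eff² μ_B² / (3 k_B); the sketch below assumes a Gd₃Ga₂ molar mass of about 611 g/mol and that the moment is quoted per Gd ion, neither of which is stated explicitly in the record.

      # Consistency check: recover the effective moment per Gd ion from the
      # reported Curie-Weiss fit chi_g = 4.45e-2 / (T + 23) emu/(g Oe) (CGS units).
      N_A  = 6.02214e23       # Avogadro constant, mol^-1
      k_B  = 1.380649e-16     # Boltzmann constant, erg/K
      mu_B = 9.27401e-21      # Bohr magneton, erg/G

      C_g   = 4.45e-2                      # Curie constant per gram, emu K / (g Oe)
      M     = 3 * 157.25 + 2 * 69.723      # molar mass of Gd3Ga2, ~611 g/mol (assumed)
      C_ion = C_g * M / 3                  # Curie constant per mole of Gd ions

      mu_eff = (3.0 * k_B * C_ion / N_A) ** 0.5 / mu_B
      print(f"mu_eff = {mu_eff:.2f} Bohr magnetons per Gd ion")   # ~8.5, cf. 8.51 reported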

  4. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  5. Electron spin resonance study of the demagnetization fields of the ferromagnetic and paramagnetic films

    Directory of Open Access Journals (Sweden)

    I.I. Gimazov, Yu.I. Talanov

    2015-12-01

    The results of an electron spin resonance study of the La1-xCaxMnO3 manganite and diphenyl-picrylhydrazyl thin films, for magnetic fields parallel and perpendicular to the plane of the films, are presented. The temperature dependence of the demagnetizing field is obtained. The parameters of the Curie-Weiss law are estimated for the paramagnetic thin film.
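
    Estimating Curie-Weiss parameters of this kind typically amounts to a linear fit: with χ(T) = C/(T - θ), the inverse susceptibility 1/χ is linear in T. The sketch below illustrates the procedure on synthetic numbers; the values are illustrative, not taken from the measurement.

      import numpy as np

      def fit_curie_weiss(T, chi):
          """Fit chi(T) = C / (T - theta) by linear regression of 1/chi on T.
          Since 1/chi = T/C - theta/C, the slope gives C and the intercept gives theta."""
          slope, intercept = np.polyfit(T, 1.0 / chi, 1)
          C = 1.0 / slope
          theta = -intercept * C
          return C, theta

      # illustrative synthetic data: C = 0.03 emu K/g, theta = 15 K, 2% noise
      rng = np.random.default_rng(1)
      T = np.linspace(150.0, 300.0, 40)
      chi = 0.03 / (T - 15.0) * (1.0 + 0.02 * rng.standard_normal(T.size))
      print(fit_curie_weiss(T, chi))      # recovers approximately (0.03, 15)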

  6. Bringing ATLAS production to HPC resources. A case study with SuperMuc and Hydra

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Walker, Rodney [LMU Muenchen (Germany); Kennedy, John; Mazzaferro, Luca [RZG Garching (Germany); Kluth, Stefan [Max-Planck-Institut fuer Physik, Muenchen (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    The use of supercomputer (HPC) resources by ATLAS is becoming viable due to the changing nature of these systems, and it is attractive given the need for increasing amounts of simulated data. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2015, and the corresponding need for simulated data may exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This contribution presents the results of two projects undertaken by LMU/LRZ and MPP/RZG to use the supercomputer facilities SuperMuc (LRZ) and Hydra (RZG), both Linux-based supercomputers in the 100k CPU-core category. The integration of such HPC resources into the ATLAS production system poses many challenges. Firstly, established techniques and features of standard WLCG operation (Grid middleware, software installation, outside connectivity, etc.) are prohibited or severely restricted on HPC systems. Secondly, efficient use of the available resources requires massive multi-core jobs, back-fill submission, and check-pointing. We discuss the customization of these components and the strategies for HPC usage, as well as possible future directions.
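
    As a purely illustrative sketch of the back-fill idea (generic logic, not the ATLAS production system's actual submission interface), a submitter can size each multi-core batch to the most productive free slot the batch scheduler currently advertises; the node counts, rates, and thresholds below are assumptions.

      from dataclasses import dataclass

      @dataclass
      class Gap:
          nodes: int        # free nodes advertised by the batch scheduler
          minutes: int      # time before the next large reservation starts

      def pick_backfill_job(gaps, cores_per_node=28, events_per_core_hour=2.0,
                            max_nodes=512, min_minutes=30):
          """Size a simulation batch to the most productive scheduler gap.

          Returns (nodes, minutes, n_events) for the chosen gap, or None if no gap
          is worth starting.  Checkpointing would let jobs span several gaps; this
          sketch simply stays inside a single gap.
          """
          best = None
          for g in gaps:
              nodes = min(g.nodes, max_nodes)
              if nodes == 0 or g.minutes < min_minutes:
                  continue
              events = int(nodes * cores_per_node * events_per_core_hour * g.minutes / 60)
              if best is None or events > best[2]:
                  best = (nodes, g.minutes, events)
          return best

      # usage: a narrow long gap versus a wide short gap
      print(pick_backfill_job([Gap(40, 120), Gap(300, 45)]))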

  7. Ranking metrics in gene set enrichment analysis: do they matter?

    Science.gov (United States)

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis appears to be one of the most commonly used (over 10,000 citations). An important parameter that can affect the final result is the choice of metric for ranking genes; applying the default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate, and computational load was established: the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio, and the Baumgartner-Weiss-Schindler test statistic. For false positive rate estimation, all selected ranking metrics were robust with respect to sample size. For sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler statistic and the Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised, implemented in MATLAB, and made available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner-Weiss
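
    For illustration, one of the four metrics singled out above, the absolute Signal-To-Noise ratio, ranks gene i by |mean1 - mean2| / (sd1 + sd2) computed across the two phenotype groups; the sketch below is a generic NumPy implementation and is not code from the MrGSEA repository.

      import numpy as np

      def abs_signal_to_noise(expr, group1):
          """Rank genes by the absolute Signal-To-Noise ratio
          |S2N| = |mean1 - mean2| / (sd1 + sd2).

          expr   : genes x samples expression matrix
          group1 : boolean mask over samples, True for the first phenotype
          Returns (indices sorted from highest to lowest |S2N|, metric values).
          """
          g1, g2 = expr[:, group1], expr[:, ~group1]
          s2n = (g1.mean(axis=1) - g2.mean(axis=1)) / \
                (g1.std(axis=1, ddof=1) + g2.std(axis=1, ddof=1))
          metric = np.abs(s2n)
          return np.argsort(-metric), metric

      # usage on a small random matrix (20 genes, 5 + 5 samples)
      rng = np.random.default_rng(2)
      expr = rng.normal(size=(20, 10))
      group1 = np.array([True] * 5 + [False] * 5)
      order, metric = abs_signal_to_noise(expr, group1)
      print(order[:5], metric[order[:5]])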

  8. NETL Super Computer

    Data.gov (United States)

    Federal Laboratory Consortium — The NETL Super Computer was designed for performing engineering calculations that apply to fossil energy research. It is one of the world’s larger supercomputers,...

  9. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    ... VLSI clock interconnects; delay variability; PDF; process variation; Gaussian random ... Supercomputer Education and Research Centre, Indian Institute of Science, ... Manuscript received: 27 February 2009; Manuscript revised: 9 February ...

  10. A Scheduling-Based Framework for Efficient Massively Parallel Execution, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The barrier to entry for creating efficient, scalable applications for heterogeneous supercomputing environments is too high. EM Photonics has found that the majority of...

  11. Oak Ridge Leadership Computing Facility (OLCF)

    Data.gov (United States)

    Federal Laboratory Consortium — The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of standing up a supercomputer 100 times...

  12. Metabolomics Workbench (MetWB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Metabolomics Program's Data Repository and Coordinating Center (DRCC), housed at the San Diego Supercomputer Center (SDSC), University of California, San Diego,...

  13. What does the Holocaust mean to us? A comparative view from America and Estonia / Anton Weiss-Wendt

    Index Scriptorium Estoniae

    Weiss-Wendt, Anton

    2001-01-01

    On books recently published in America devoted to the meaning of the Holocaust: Novick, Peter. The Holocaust in American Life; Cole, Tim. Selling the Holocaust: From Auschwitz to Schindler. How History is Bought, Packaged, and Sold; Finkelstein, Norman. The Holocaust Industry: Reflections on the Exploitation of Jewish Suffering

  14. First-principles quantum-mechanical investigations: The role of water in catalytic conversion of furfural on Pd(111)

    Science.gov (United States)

    Xue, Wenhua; Borja, Miguel Gonzalez; Resasco, Daniel E.; Wang, Sanwu

    2015-03-01

    In the study of catalytic reactions of biomass, furfural conversion over metal catalysts in the presence of water has attracted wide attention. Recent experiments showed that the proportion of alcohol product from catalytic furfural conversion over palladium in the presence of water is significantly increased compared with other solvents, including dioxane, decalin, and ethanol. We investigated the microscopic mechanism of the reactions using first-principles quantum-mechanical calculations and, in particular, identified the important role of water and the liquid/solid interface in furfural conversion. Our results provide atomic-scale details of the catalytic reactions. Supported by DOE (DE-SC0004600). This research used supercomputer resources at NERSC, XSEDE, TACC, and the Tandy Supercomputing Center.

  15. [High energy particle physics]: Progress report covering the five year period from August 1, 1984 to May 31, 1989 with special emphasis for the period of August 1, 1988 to May 31, 1989: Part 1

    International Nuclear Information System (INIS)

    1989-01-01

    In this document the High Energy Physics group reviews its accomplishments and progress during the past five years, with special emphasis for the past year and presents plans for continuing research during the next several years. During the last few years the effort of the experimental group has been divided approximately equally between fixed target physics and preparations for future collider experiments. The main emphasis of the theory group has been in the area of strong and electroweak phenomenology with an emphasis on hard scattering processes. With the recent creation of the Supercomputer Computations Research Institute, some work has also been done in the area of numerical simulations of condensed matter spin models and techniques for implementing numerical simulations on supercomputers

  16. DCA++: A case for science driven application development for leadership computing platforms

    International Nuclear Information System (INIS)

    Summers, Michael S; Alvarez, Gonzalo; Meredith, Jeremy; Maier, Thomas A; Schulthess, Thomas C

    2009-01-01

    The DCA++ code was one of the early science applications that ran on Jaguar at the National Center for Computational Sciences, and the first application code to sustain a petaflop/s under production conditions on a general-purpose supercomputer. The code implements a quantum cluster method with a Quantum Monte Carlo kernel to solve the 2D Hubbard model for high-temperature superconductivity. It is implemented in C++, making heavy use of the generic programming model. In this paper, we discuss how this code was developed, reaching scalability and high efficiency on the world's fastest supercomputer in only a few years. We show how the use of generic concepts combined with systematic refactoring of code is a better strategy for computational sciences than a comprehensive upfront design.

  17. The TESS Science Processing Operations Center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with Rp less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  18. Study of RE-garnets using BPW method

    Science.gov (United States)

    Goveas, Neena; Mukhopadhyay, P.; Mukhopadhyay, G.

    1995-02-01

    The magnetic susceptibility of rare-earth (Y and Lu) iron garnets is studied using a modified Bethe-Peierls-Weiss (BPW) approximation. The modifications enable us to incorporate the three exchange parameters Jad, Jaa and Jdd necessary to describe the systems. We get excellent fits to the experimental susceptibilities from which we determined the J-values. These also give excellent agreement with the spin wave dispersion relation constant D.
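
    For context, the simplest molecular-field (Néel) treatment of a two-sublattice garnet, which the modified BPW scheme refines, gives the paramagnetic susceptibility as the solution of a 2x2 linear system in the sublattice magnetisations; the sketch below uses illustrative Curie constants and molecular-field coefficients and is not the paper's BPW calculation.

      import numpy as np

      def neel_susceptibility(T, Ca, Cd, laa, ldd, lad):
          """Paramagnetic susceptibility of a two-sublattice (a, d) ferrimagnet in the
          ordinary Weiss molecular-field approximation.  Solves
              M_a = (Ca/T) * (H + laa*M_a + lad*M_d)
              M_d = (Cd/T) * (H + lad*M_a + ldd*M_d)
          for H = 1 and returns chi = M_a + M_d."""
          A = np.array([[1.0 - Ca * laa / T, -Ca * lad / T],
                        [-Cd * lad / T, 1.0 - Cd * ldd / T]])
          b = np.array([Ca / T, Cd / T])
          Ma, Md = np.linalg.solve(A, b)
          return Ma + Md

      # illustrative parameters (negative values = antiferromagnetic coupling);
      # not fitted values from the paper
      for T in (300.0, 400.0, 500.0):
          print(T, neel_susceptibility(T, Ca=4.0, Cd=6.0, laa=-10.0, ldd=-20.0, lad=-60.0))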

  19. Use of a Vaccinia Construct Expressing the Circumsporozoite Protein in the Analysis of Protective Immunity to Plasmodium yoelii

    Science.gov (United States)

    1988-01-01

    William R. Majarian, Frank A. Robey, Walter Weiss, and Stephen L. Hoffman, Infectious Diseases Department, Naval Medical Research Institute...autoradiography. Recombinant viruses which were positive in this assay were subjected to 3 rounds of plaque purification. Finally, plaque-purified virus was...mechanisms in the protective immunity elicited by immunization with irradiated sporozoites (3,7,8,9). In an attempt to induce a protective cellular immune

  20. "Head käed" tõid Berlinalelt auhinna / Tiit Tuumalu

    Index Scriptorium Estoniae

    Tuumalu, Tiit, 1971-

    2002-01-01

    On the results of the 52nd Berlin Film Festival. Peeter Simm's "Head käed" ("Good Hands"), screened in the festival's "Panorama Special" programme, was well received by audiences and shared the Manfred Salzgeber Award, which funds subtitling for distribution on European screens, with the Slovenian director Maja Weiss's film "Piirivalvur" ("Guardian of the Frontier"). The Golden Bear was shared by the British director Paul Greengrass's "Bloody Sunday" and the Japanese director Hayao Miyazaki's animated film "Spirited Away". The Silver Bear for best director went to Otar Ioseliani