WorldWideScience

Sample records for supercomputing conference sponsored

  1. 41 CFR 301-74.8 - Who may authorize reimbursement of the conference lodging allowance for a Government sponsored...

    Science.gov (United States)

    2010-07-01

    ...? The approval authority for the conference lodging allowance is the Government agency sponsoring the conference. The sponsoring agency will determine the appropriate conference lodging allowance, up to 25... a senior agency official at the sponsoring agency. ...

  2. International Conference Nuclear Theory in the Supercomputing Era 2014

    CERN Document Server

    2014-01-01

The conference focuses on forefront challenges in physics, namely the fundamentals of nuclear structure and reactions, the origin of the strong inter-nucleon interactions from QCD, and computational nuclear physics with leadership-class computer facilities to provide forefront simulations leading to new discoveries. This is the fourth in the series of NTSE-HITES conferences aimed at bringing together nuclear theorists, computer scientists and applied mathematicians.

  3. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

Full Text Available This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11.] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75.] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of the Forty-Fourth Meeting of the Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76.] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of the 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77.] Segall, Richard S.; Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of the 2010 Conference on Applied Research in Information Technology, sponsored by

  4. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  5. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  6. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    Science.gov (United States)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  7. 41 CFR 301-74.14 - Are there any special requirements for sponsoring or funding a conference at a hotel, motel or...

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false Are there any special requirements for sponsoring or funding a conference at a hotel, motel or other place of public accommodation... Responsibilities § 301-74.14 Are there any special requirements for sponsoring or funding a conference at a...

  8. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  9. Sandia's network for Supercomputing '95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected to demonstrate the functionality and capability of an AT&T Globeview 20 Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  10. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  11. Estimating the Benefits of Government-Sponsored Energy R&D: Synthesis of Conference Discussions

    Energy Technology Data Exchange (ETDEWEB)

    Lee, R.

    2003-11-14

    In 2001, a National Research Council (NRC) committee conducted a retrospective study of the benefits of some of the energy efficiency and fossil energy programs in the U.S. Department of Energy (DOE). As part of its study, the NRC committee developed a methodological framework for estimating these benefits. Following the NRC report, a conference was organized by Oak Ridge National Laboratory to discuss ways of adapting and refining the NRC framework for possible use by DOE offices to help plan and manage their R&D. This report is a synthesis of the discussions at the conference.

  12. 1973 Winter Simulation Conference. Sponsored by ACM/AIIE/SHARE/SCi/TIMS.

    Science.gov (United States)

    Hoggatt, Austin Curwood, Ed.

    A record of the current state of the art of simulation and the major part it now plays in policy formation in large organizations is provided by these conference proceedings. The 40 papers presented reveal an emphasis on the applications of simulation. In addition, the abstracts of 28 papers submitted to a more informal "paper fair" are also…

  13. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
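The computational screening the abstract describes amounts to filtering a large table of DFT-computed properties and flagging candidates for follow-up synthesis. Below is a minimal sketch of that filtering step; the records, field names and cutoff values are hypothetical illustrations, not Materials Project data.

```python
# Hypothetical sketch of computational screening over DFT-derived records.
# The records, fields and thresholds below are illustrative, not real data.

def screen_candidates(records, max_band_gap=0.5, max_e_above_hull=0.05):
    """Keep compounds that look promising as thermoelectrics: a small
    band gap (for carrier transport) and energy near the convex hull
    (for thermodynamic stability)."""
    return [
        r["formula"]
        for r in records
        if r["band_gap_eV"] <= max_band_gap
        and r["e_above_hull_eV"] <= max_e_above_hull
    ]

records = [
    {"formula": "CuFeS2", "band_gap_eV": 0.4, "e_above_hull_eV": 0.00},
    {"formula": "NaAlSi", "band_gap_eV": 0.0, "e_above_hull_eV": 0.20},
    {"formula": "AgBiSe2", "band_gap_eV": 0.3, "e_above_hull_eV": 0.01},
]

print(screen_candidates(records))  # ['CuFeS2', 'AgBiSe2']
```

In a real workflow the records would come from a database of high-throughput DFT results and the surviving candidates would be ranked by further, more expensive calculations.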

  14. Co-sponsored second quarter progress review conference on district heating

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-01-01

    A summary of the progress review conference on district heating and cooling systems is presented. The agenda and lists of speakers and attendees are presented. A history of district heating and some present needs and future policies are given and an excerpt from the National District Heating Program Strategy (DOE, March 1980) is included. Following the presentation, District Heating and Cooling Systems Program, by Alan M. Rubin, a fact sheet on DOE's Integrated Community Energy Systems Program and information from an oral presentation, District Heating and Cooling Systems for Communities Through Power Plant Retrofit Distribution Network, are given. The Second Quarterly Oral Report to the US DOE on the District Heating and Cooling Project in Detroit; the executive summary of the Piqua, Ohio District Heating and Cooling Demonstration Project; the Second Quarterly Report of the Moorehead, Minnesota District Heating Project; and the report from the Moorehead, Minnesota mayor on the Hot Water District Heating Project are presented.

  15. [Sponsoring of medical conferences, workshops and symposia by pharmaceutical companies. Physicians must be wary of this!].

    Science.gov (United States)

    Warntjen, M

    2009-12-01

The longstanding conventional forms of cooperation between medical organizations and physicians on the one hand and the pharmaceutical industry and manufacturers of medical products on the other hand nowadays hold the risk of coming into conflict with the public prosecutor. Typical circumstances which are taken up by the investigating authorities are financial supports of medical conferences, workshops and symposia. To understand the problem under criminal law it is important to become acquainted with the protective aim of the statutory offences of the acceptance of benefits according to § 331 of the Penal Code (Strafgesetzbuch, StGB) and of corruption according to § 332 of the Penal Code. The "trust of the general public in the objectivity of governmental decisions" must be protected and the "evil appearance of the corruptibility of official acts" must be counteracted. A basic differentiation is made between physicians with and without office-bearing functions. By paying attention to the recommendations and basic principles of cooperation between the medical profession and the healthcare industry presented in this article (transparency principle, equivalence principle, documentation principle and separation principle) the emergence of any suspicious factors can be effectively avoided.

  16. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  17. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  18. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.
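The lattice Boltzmann method mentioned in the abstract evolves particle distribution functions through a collide-and-stream cycle. As a rough illustration of that cycle only, here is a minimal one-dimensional diffusion sketch; the TeraGyroid runs used a far richer three-dimensional amphiphilic fluid model, and the grid size and relaxation time below are arbitrary assumptions.

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann sketch for pure diffusion.
# Grid size and relaxation time are arbitrary illustrative choices.

N = 128                          # lattice sites
TAU = 1.0                        # relaxation time; D = (TAU - 0.5) / 3
W = np.array([2/3, 1/6, 1/6])    # lattice weights
C = np.array([0, 1, -1])         # lattice velocities

# initial density: a narrow pulse on a uniform background
rho = np.full(N, 0.1)
rho[N // 2] = 1.0

# initialize distributions at their local equilibrium
f = W[:, None] * rho[None, :]

for _ in range(200):
    rho = f.sum(axis=0)                 # recover density from distributions
    feq = W[:, None] * rho[None, :]     # local equilibrium
    f += (feq - f) / TAU                # collide (BGK relaxation)
    for i in range(3):
        f[i] = np.roll(f[i], C[i])      # stream along each lattice velocity

rho = f.sum(axis=0)
print(rho.max())   # the pulse has spread out; total mass is conserved
```

Collision relaxes each site toward local equilibrium and streaming shifts populations to neighboring sites; the production codes apply the same cycle in three dimensions with multiple interacting fluid components.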

  19. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  20. The Student and the System. (Proceedings of the Conference Sponsored by the Graduate Students' Association, OISE, November 1969.)

    Science.gov (United States)

    Rusk, Bruce, Ed.; And Others

    The purpose of the conference was to explore the common elements in the educational system at the secondary and post secondary levels, focusing on the changing role of the student in the system. The proceedings consist of: (1) two talks delivered at the conference: "The Student and the System," by Edgar Friedenberg, and "Demythologizing the…

  1. Emerging supercomputer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Messina, P.C.

    1987-01-01

This paper will examine the current and near-future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  2. NSF Commits to Supercomputers.

    Science.gov (United States)

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  3. Parental Leave: Options for Working Parents. A Report of a Conference Sponsored by the Association of Junior Leagues (March 1985).

    Science.gov (United States)

    Orr, Sally; Haskett, George

    This conference report addresses the issue of parental leave, particularly maternity leave at childbirth and parenting leaves for fathers and mothers after childbirth. Growing interest in this area is attributed to the dramatic change over the past 10 years in the labor force behavior of women. Currently existing national and employer policies for…

  4. Intensive care management of patients with liver disease: proceedings of a single-topic conference sponsored by the Brazilian Society of Hepatology.

    Science.gov (United States)

    Bittencourt, Paulo Lisboa; Terra, Carlos; Parise, Edison Roberto; Farias, Alberto Queiroz; Arroyo, Vincent; Fernandez, Javier; Pereira, Gustavo; Maubouisson, Luiz Marcelo; Andrade, Guilherme Marques; Costa, Fernando Gomes de Barros; Codes, Liana; Andrade, Antônio Ricardo; Matos, Angelo; Torres, André; Couto, Fernanda; Zyngier, Ivan

    2015-12-01

Survival rates of critically ill patients with liver disease have sharply increased in recent years due to several improvements in the management of decompensated cirrhosis and acute liver failure. This is ascribed to the incorporation of evidence-based strategies from clinical trials aiming to reduce mortality. In order to discuss the cutting-edge evidence regarding critical care of patients with liver disease, a joint single-topic conference was recently sponsored by the Brazilian Society of Hepatology in cooperation with the Brazilian Society of Intensive Care Medicine and the Brazilian Association for Organ Transplantation. This paper summarizes the proceedings of the aforementioned meeting and is intended to guide intensive care physicians, gastroenterologists and hepatologists in the care management of patients with liver disease.

  5. Ending war against women. CRLP sponsors workshop on violence against women in situations of armed conflict during Beijing + 5 regional conference.

    Science.gov (United States)

    Molloy, J

    2000-03-01

Sexual violence during armed conflict has been a primary concern in conferences and meetings of international organizations. It has been rightly viewed as a war crime and a violation of women's human rights. In the Economic Commission for Europe conference in January 2000, the issue was discussed extensively. For their part, the Center for Reproductive Law and Policy and the International Women's Health Coalition sponsored a workshop addressing women's sexual and reproductive rights in situations of armed conflict. Participants of the workshop shared experiences from the conflicts in Eastern Europe. Reports indicated that affected women experienced rape and domestic violence, and that trafficking of women has escalated. Compounding these factors was decreased government funding for contraception, abortion, and health education. In addition, the panelists suggested that women's reproductive health and rights could be improved with greater mental and physical health services and stronger social support during wartime. They further recommended that peacekeeping personnel and others pay closer attention to who is perpetrating the violence against women so that prosecutions can take place after the conflict has ended. Moreover, international relief workers should also work to build the capacity of local health personnel to meet women's health needs throughout the transition period.

  6. Adventures in Supercomputing: An innovative program

    Energy Technology Data Exchange (ETDEWEB)

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but evidences of systemic reform are beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidences of systemic change in many of the AiS schools in Tennessee.

  7. Energy sciences supercomputing 1990

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Kaiper, G.V. (eds.)

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the superconducting super collider; and overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; monte carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  8. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  9. Petaflop supercomputers of China

    Institute of Scientific and Technical Information of China (English)

    Guoliang CHEN

    2010-01-01

After ten years of development, high performance computing (HPC) in China has made remarkable progress. In November 2010, the NUDT Tianhe-1A and the Dawning Nebulae claimed the 1st and 3rd places, respectively, in the TOP500 Supercomputers List; this is international recognition of the level that China has achieved in high performance computer manufacturing.

  10. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur...

  11. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  12. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
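On the torus network described in the abstract, each node exchanges messages with its nearest neighbors in three dimensions, with links wrapping around at the machine edges. A sketch of that neighbor addressing, using an arbitrary 4x4x4 machine shape for illustration (not the actual machine dimensions):

```python
# Sketch of nearest-neighbor addressing on a 3D torus network.
# The 4x4x4 machine shape is an arbitrary example, not the real topology size.

def torus_neighbors(node, shape):
    """Return the six nearest neighbors of `node` on a 3D torus,
    wrapping coordinates modulo the machine dimensions."""
    x, y, z = node
    X, Y, Z = shape
    return [
        ((x + 1) % X, y, z), ((x - 1) % X, y, z),
        (x, (y + 1) % Y, z), (x, (y - 1) % Y, z),
        (x, y, (z + 1) % Z), (x, y, (z - 1) % Z),
    ]

print(torus_neighbors((0, 0, 0), (4, 4, 4)))
# a corner node wraps around: its neighbors include (3, 0, 0) and (0, 3, 0)
```

The wraparound links keep every node's degree constant and bound the worst-case hop count, which is why torus topologies suit nearest-neighbor-heavy parallel algorithms.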

  13. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  15. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

Pratt, Thomas J.; Tarman, Thomas D.; Martinez, Luis M.; Miller, Marc M.; Adams, Roger L.; Chen, Helen Y.; Brandt, James M.; Wyckoff, Peter S.

    2000-07-24

This document highlights the DISCOM² Distance computing and communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging Terabit Router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  16. Microprocessors: from desktops to supercomputers.

    Science.gov (United States)

    Baskett, F; Hennessy, J L

    1993-08-13

Continuing improvements in integrated circuit technology and computer architecture have driven microprocessors to performance levels that rival those of supercomputers, at a fraction of the price. The use of sophisticated memory hierarchies enables microprocessor-based machines to have very large memories built from commodity dynamic random access memory while retaining the high bandwidth and low access time needed in a high-performance machine. Parallel processors composed of these high-performance microprocessors are becoming the supercomputing technology of choice for scientific and engineering applications. The challenges for these new supercomputers have been in developing multiprocessor architectures that are easy to program and that deliver high performance without extraordinary programming efforts by users. Recent progress in multiprocessor architecture has led to ways to meet these challenges.

  17. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  18. Improved Access to Supercomputers Boosts Chemical Applications.

    Science.gov (United States)

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling is reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  19. Desktop supercomputers advance medical imaging.

    Science.gov (United States)

    Frisiello, R S

    1991-02-01

    Medical imaging tools that radiologists as well as a wide range of clinicians and healthcare professionals have come to depend upon are emerging into the next phase of functionality. The strides being made in supercomputing technologies--including reduction of size and price--are pushing medical imaging to a new level of accuracy and functionality.

  20. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  1. First International Conference on Lysophospholipids and Related Bioactive Lipids in Biology and Disease Sponsored by the Federation of American Societies of Experimental Biology

    Directory of Open Access Journals (Sweden)

    Edward J. Goetzl

    2001-01-01

    Full Text Available The First International Conference on “Lysophospholipids and Related Bioactive Lipids in Biology and Diseases” was held in Tucson, AZ on June 10-14, 2001, under the sponsorship of the Federation of American Societies of Experimental Biology (FASEB). More than 100 scientists from 11 countries discussed the recent results of basic and clinical research in the broad biology of this emerging field. Immense progress was reported in defining the biochemistry of generation and biology of cellular effects of the bioactive lysophospholipids (LPLs). These aspects of LPLs described at the conference parallel in many ways those of the eicosanoid mediators, such as prostaglandins and leukotrienes. As for eicosanoids, the LPLs termed lysophosphatidic acid (LPA) and sphingosine 1-phosphate (S1P) are produced enzymatically from phospholipid precursors in cell membranes and act on cells at nanomolar concentrations through subfamilies of receptors of the G protein-coupled superfamily. The rate-limiting steps in production of LPLs were reported to be controlled by specific phospholipases for LPA and sphingosine kinases for S1P. The receptor subfamilies formerly were designated endothelial differentiation gene-encoded receptors, or Edg Rs, for their original discovery in endothelial cells. A nomenclature committee active at this conference suggested the ligand-based names: S1P1 = Edg-1, S1P2 = Edg-5, S1P3 = Edg-3, S1P4 = Edg-6, and S1P5 = Edg-8; LPA1 = Edg-2, LPA2 = Edg-4, and LPA3 = Edg-7 receptors. Several families of lysophospholipid phosphatases (LPPs) have been characterized, which biodegrade LPA, whereas S1P is inactivated with similar rapidity by both a lyase and S1P phosphatases.

  2. Comparing Clusters and Supercomputers for Lattice QCD

    CERN Document Server

    Gottlieb, S

    2001-01-01

    Since the development of the Beowulf project to build a parallel computer from commodity PC components, many such clusters have been built. The MILC QCD code has been run on a variety of clusters and supercomputers. Key design features are identified, and the cost effectiveness of clusters and supercomputers is compared.

  3. Low Cost Supercomputer for Applications in Physics

    Science.gov (United States)

    Ahmed, Maqsood; Ahmed, Rashid; Saeed, M. Alam; Rashid, Haris; Fazal-e-Aleem

    2007-02-01

    Using parallel processing techniques and commodity hardware, Beowulf supercomputers can be built at a much lower cost. Research organizations and educational institutions are using this technique to build their own high-performance clusters. In this paper we discuss the architecture and design of a Beowulf supercomputer and our own experience of building the BURRAQ cluster.

  4. Blogs & Sponsored Articles

    OpenAIRE

    Sabat, Fabien

    2012-01-01

    I wrote this thesis to deepen my knowledge of the sponsored-articles market. I have been working for one year at a company that sells sponsored articles on blogs to advertisers, which is why I decided to focus on the blogosphere to understand how its rising influence allowed the emergence of the sponsored-articles market. Moreover, I tried to propose solutions to improve my company's performance in this market. In this paper, I describe what a blog is, how it is possible to classi...

  5. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops," or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  6. 16 million [pounds] investment for 'virtual supercomputer'

    CERN Multimedia

    Holland, C

    2003-01-01

    "The Particle Physics and Astronomy Research Council is to spend 16million [pounds] to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1/2 page)

  7. Supercomputers open window of opportunity for nursing.

    Science.gov (United States)

    Meintz, S L

    1993-01-01

    A window of opportunity was opened for nurse researchers with the High Performance Computing and Communications (HPCC) initiative in President Bush's 1992 fiscal-year budget. Nursing research moved into the high-performance computing environment through the University of Nevada Las Vegas/Cray Project for Nursing and Health Data Research (PNHDR). Using the CRAY YMP 2/216 supercomputer, the PNHDR established the validity of a supercomputer platform for nursing research. In addition, the research has identified a paradigm shift in statistical analysis, delineated actual and potential barriers to nursing research in a supercomputing environment, conceptualized a new branch of nursing science called Nurmetrics, and discovered new avenues for nursing research utilizing supercomputing tools.

  8. Misleading Performance Reporting in the Supercomputing Field

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1992-01-01

    Full Text Available In a previous humorous note, I outlined 12 ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recent published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  9. Simulating Galactic Winds on Supercomputers

    Science.gov (United States)

    Schneider, Evan

    2017-01-01

    Galactic winds are a ubiquitous feature of rapidly star-forming galaxies. Observations of nearby galaxies have shown that winds are complex, multiphase phenomena, comprised of outflowing gas at a large range of densities, temperatures, and velocities. Describing how starburst-driven outflows originate, evolve, and affect the circumgalactic medium and gas supply of galaxies is an important challenge for theories of galaxy evolution. In this talk, I will discuss how we are using a new hydrodynamics code, Cholla, to improve our understanding of galactic winds. Cholla is a massively parallel, GPU-based code that takes advantage of specialized hardware on the newest generation of supercomputers. With Cholla, we can perform large, three-dimensional simulations of multiphase outflows, allowing us to track the coupling of mass and momentum between gas phases across hundreds of parsecs at sub-parsec resolution. The results of our recent simulations demonstrate that the evolution of cool gas in galactic winds is highly dependent on the initial structure of embedded clouds. In particular, we find that turbulent density structures lead to more efficient mass transfer from cool to hot phases of the wind. I will discuss the implications of our results both for the incorporation of winds into cosmological simulations, and for interpretations of observed multiphase winds and the circumgalactic medium of nearby galaxies.

  10. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    Energy Technology Data Exchange (ETDEWEB)

    HSU, CHUNG-HSING [Los Alamos National Laboratory; FENG, WU-CHUN [NON LANL; CHING, AVERY [NON LANL

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
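The performance-per-watt figures quoted above are easy to sanity-check. The sketch below is mine, not the paper's; the variable names are invented, and "over 300% better" is read as roughly a 4x advantage:

```python
# Back-of-the-envelope check of the figures quoted above:
# a 12-node desktop system achieving 14 Gflops on Linpack at 185 W under load.
desktop_flops = 14e9   # 14 Gflops
desktop_watts = 185.0

ratio_mflops_per_watt = desktop_flops / desktop_watts / 1e6
print(f"Desktop: {ratio_mflops_per_watt:.1f} Mflops/W")  # about 75.7 Mflops/W

# Reading "over 300% better" as a >4x advantage bounds the reference SMP:
reference_bound = ratio_mflops_per_watt / 4
print(f"Reference SMP at most about {reference_bound:.1f} Mflops/W")
```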

  11. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X/MP48 at the National Center for Supercomputing Applications at University of Illinois at Urbana-Champaign, IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  12. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer, built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second). The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  13. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  14. TOP500 Supercomputers for November 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer, installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems were built by Hewlett-Packard and are based on the AlphaServer SC computer system.

  15. Input/output behavior of supercomputing applications

    Science.gov (United States)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
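The read-ahead idea mentioned above can be illustrated with a toy cache model. This is a minimal sketch under assumed parameters (block-granular LRU cache, fixed prefetch depth), not the paper's trace-driven simulator:

```python
from collections import OrderedDict

class ReadAheadCache:
    """Toy model of read-ahead buffering: on a miss for block b,
    also prefetch blocks b+1..b+depth, so a sequential reader
    hits the cache on most subsequent requests."""

    def __init__(self, capacity=64, depth=4):
        self.capacity = capacity
        self.depth = depth
        self.cache = OrderedDict()   # block number -> True, kept in LRU order
        self.hits = self.misses = 0

    def _install(self, block):
        self.cache[block] = True
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.misses += 1
            for b in range(block, block + 1 + self.depth):
                self._install(b)             # fetch the block and read ahead

cache = ReadAheadCache()
for b in range(100):          # a purely sequential scan of 100 blocks
    cache.read(b)
print(cache.hits, cache.misses)   # prints "80 20": one miss per prefetch group
```

With depth 4, every miss services five sequential blocks, so a sequential workload sees only one miss in five reads; a random workload would gain far less, which is why burstiness and access pattern matter in such simulations.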

  16. GPUs: An Oasis in the Supercomputing Desert

    CERN Document Server

    Kamleh, Waseem

    2012-01-01

    A novel metric is introduced to compare the supercomputing resources available to academic researchers on a national basis. Data from the supercomputing Top 500 and the top 500 universities in the Academic Ranking of World Universities (ARWU) are combined to form the proposed "500/500" score for a given country. Australia scores poorly in the 500/500 metric when compared with other countries with a similar ARWU ranking, an indication that HPC-based researchers in Australia are at a relative disadvantage with respect to their overseas competitors. For HPC problems where single precision is sufficient, commodity GPUs provide a cost-effective means of quenching the computational thirst of otherwise parched Lattice practitioners traversing the Australian supercomputing desert. We explore some of the more difficult terrain in single precision territory, finding that BiCGStab is unreliable in single precision at large lattice sizes. We test the CGNE and CGNR forms of the conjugate gradient method on the normal equa...
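The CGNR method mentioned above (conjugate gradient applied to the normal equations) can be sketched in a few lines. This is an illustrative single-precision NumPy version with invented test data, not the authors' lattice QCD implementation:

```python
import numpy as np

def cgnr(A, b, tol=1e-2, max_iter=200):
    """CGNR sketch: conjugate gradient on the normal equations
    A^T A x = A^T b, carried out entirely in single precision."""
    A = A.astype(np.float32)
    b = b.astype(np.float32)
    x = np.zeros(A.shape[1], dtype=np.float32)
    r = b - A @ x          # residual of the original system
    z = A.T @ r            # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = zz / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)).astype(np.float32)
x_true = rng.standard_normal(20).astype(np.float32)
b = A @ x_true
x = cgnr(A, b)
print(float(np.linalg.norm(x - x_true)))   # small for this well-conditioned example
```

Squaring the operator squares its condition number, which is exactly why single-precision CGNR/CGNE stalls on ill-conditioned systems such as large lattices; the well-conditioned random matrix here converges without trouble.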

  17. Floating point arithmetic in future supercomputers

    Science.gov (United States)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
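The recommended 64-bit IEEE format (1 sign bit, 11 exponent bits, 52 mantissa bits) can be made concrete by unpacking a double into its three fields. A small sketch using only the standard library (the function name is mine):

```python
import struct

def fp64_fields(x):
    """Split a float into the IEEE 754 double-precision fields:
    1 sign bit, 11 exponent bits (biased by 1023), 52 mantissa bits."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64-bit pattern
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11 bits
    mantissa = bits & ((1 << 52) - 1)      # 52 bits
    return sign, exponent, mantissa

sign, exponent, mantissa = fp64_fields(-1.5)
print(sign, exponent, mantissa)  # prints "1 1023 2251799813685248"
```

Here -1.5 = -(1 + 0.5) * 2^0, so the sign bit is 1, the biased exponent is 0 + 1023 = 1023, and the fraction 0.5 sets only the top mantissa bit (2^51).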

  18. Sponsoring effektiv und effizient gestalten

    OpenAIRE

    Schwizer, Dominik; Reinecke, Sven

    2017-01-01

    Sponsorship is an important marketing instrument, but its contribution to value creation is not visible at first glance. Professional sponsorship controlling helps make the effectiveness and efficiency of sponsorship engagements measurable. Depending on the phase of a sponsorship, different aspects need to be considered.

  19. Sponsors and exhibitors

    Science.gov (United States)

    Xiao, Guoqing; Cai, Xiaohong; Ding, Dajun; Ma, Xinwen; Zhao, Yongtao

    2014-04-01

    CAS: Chinese Academy of Sciences
    IMP: Institute of Modern Physics, Chinese Academy of Sciences
    NNSFC: National Natural Science Foundation of China
    IUPAP: International Union of Pure and Applied Physics
    IOP: Journal of Physics B: Atomic, Molecular and Optical Physics and Journal of Physics: Conference Series

  20. Committees and sponsors

    Science.gov (United States)

    2011-10-01

    International Advisory Committee: Richard F Casten (Yale, USA), Luiz Carlos Chamon (São Paulo, Brazil), Osvaldo Civitarese (La Plata, Argentina), Jozsef Cseh (ATOMKI, Hungary), Jerry P Draayer (LSU, USA), Alfredo Galindo-Uribarri (ORNL & UT, USA), James J Kolata (Notre Dame, USA), Jorge López (UTEP, USA), Joseph B Natowitz (Texas A & M, USA), Ma Esther Ortiz (IF-UNAM), Stuart Pittel (Delaware, USA), Andrés Sandoval (IF-UNAM), Adam Szczepaniak (Indiana, USA), Piet Van Isacker (GANIL, France), Michael Wiescher (Notre Dame, USA). Organizing Committee: Libertad Barrón-Palos (Chair, IF-UNAM), Roelof Bijker (ICN-UNAM), Ruben Fossion (ICN-UNAM), David Lizcano (ININ). Sponsors: Instituto de Ciencias Nucleares, UNAM; Instituto de Física, UNAM; Instituto Nacional de Investigaciones Nucleares; División de Física Nuclear de la SMF; Centro Latinoamericano de Física.

  1. Jointly Sponsored Research Program

    Energy Technology Data Exchange (ETDEWEB)

    Everett A. Sondreal; John G. Hendrikson; Thomas A. Erickson

    2009-03-31

    U.S. Department of Energy (DOE) Cooperative Agreement DE-FC26-98FT40321 funded through the Office of Fossil Energy and administered at the National Energy Technology Laboratory (NETL) supported the performance of a Jointly Sponsored Research Program (JSRP) at the Energy & Environmental Research Center (EERC) with a minimum 50% nonfederal cost share to assist industry in commercializing and effectively applying highly efficient, nonpolluting energy systems that meet the nation's requirements for clean fuels, chemicals, and electricity in the 21st century. The EERC in partnership with its nonfederal partners jointly performed 131 JSRP projects for which the total DOE cost share was $22,716,634 (38%) and the nonfederal share was $36,776,573 (62%). Summaries of these projects are presented in this report for six program areas: (1) resource characterization and waste management, (2) air quality assessment and control, (3) advanced power systems, (4) advanced fuel forms, (5) value-added coproducts, and (6) advanced materials. The work performed under this agreement addressed DOE goals for reductions in CO{sub 2} emissions through efficiency, capture, and sequestration; near-zero emissions from highly efficient coal-fired power plants; environmental control capabilities for SO{sub 2}, NO{sub x}, fine respirable particulate (PM{sub 2.5}), and mercury; alternative transportation fuels including liquid synfuels and hydrogen; and synergistic integration of fossil and renewable resources.

  2. Conference on Non-linear Phenomena in Mathematical Physics: Dedicated to Cathleen Synge Morawetz on her 85th Birthday. The Fields Institute, Toronto, Canada September 18-20, 2008. Sponsors: Association for Women in Mathematics, Inc. and The Fields Institute

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Jennifer

    2012-10-15

    This scientific meeting focused on the legacy of Cathleen S. Morawetz and the impact that her scientific work on transonic flow and the non-linear wave equation has had on recent progress in different aspects of analysis for non-linear wave, kinetic, and quantum transport problems associated with mathematical physics. These are areas where the elements of continuum, statistical, and stochastic mechanics, and their interplay, have counterparts in the theory of existence, uniqueness, and stability of the associated systems of equations and geometric constraints. It was a central event for the applied and computational analysis community focusing on partial differential equations. The goal of the proposal was to honor Cathleen Morawetz, a highly successful woman in mathematics, while encouraging beginning researchers. The conference was successful in showcasing the work of successful women, enhancing the visibility of women in the profession, and providing role models for those just beginning their careers. The two-day conference included one day of seven 45-minute lectures, one day of six 45-minute lectures, and a poster session for junior participants. The conference program included 19 distinguished speakers, 10 poster presentations, about 70 junior and senior participants and, of course, the participation of Cathleen Synge Morawetz. The conference celebrated Morawetz's paramount contributions to the theory of non-linear equations in gas dynamics and their impact on current trends in nonlinear phenomena in mathematical physics, but also served as an awareness session on current women's contributions to mathematics.

  3. Education and Immigrant Integration in the United States and Canada. Proceedings of a Conference sponsored by the Division of United States Studies and the Canada Institute, Woodrow Wilson International Center for Scholars, and The Migration Policy Institute (April 25, 2005)

    Science.gov (United States)

    Strum, Philippa, Ed.; Biette, David, Ed.

    2005-01-01

    The Conference proceedings include an Introduction by Demetrios Papademetriou. Two panels presented speakers as follows: Panel I: Elementary and Secondary (K-12) Education: (1) Immigrant Integration and "Bilingual" Education (Alec Ian Gershberg); (2) Absent Policies: Canadian Strategies for the Education and Integration of Immigrant…

  4. Classification Schedules as Subject Enhancement in Online Catalogs. A Review of a Conference Sponsored by Forest Press, the OCLC Online Computer Library Center, and the Council on Library Resources.

    Science.gov (United States)

    Mandel, Carol A.

    This paper presents a synthesis of the ideas and issues developed at a conference convened to review the results of the Dewey Decimal Classification Online Project and explore the potential for future use of the Dewey Decimal Classification (DDC) and Library of Congress Classification (LCC) schedules in online library catalogs. Conference…

  5. Energy Conferences and Symposia; (USA)

    Energy Technology Data Exchange (ETDEWEB)

    Osborne, J.H.; Simpson, W.F. Jr. (eds.)

    1991-01-01

    Energy Conferences and Symposia, a monthly publication, was instituted to keep scientists, engineers, managers, and related energy professionals abreast of meetings sponsored by the Department of Energy (DOE) and by other technical associations. Announcements cover conferences, symposia, workshops, congresses, and other formal meetings pertaining to DOE programmatic interests. Complete meeting information, including title, sponsor, and contact, is presented in the main section, which is arranged alphabetically by subject area. Within a subject, citations are sorted by the beginning date of the meeting. New listings are indicated by a bullet after the conference number, and DOE-sponsored conferences are indicated by a star. Two indexes are provided for cross-referencing conference information. The Chronological Index lists conference titles by date and gives the subject area where complete information may be found. The Location Index is alphabetically sorted by the city where the conference will be held.

  6. Data-intensive computing on numerically-insensitive supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory; Fasel, Patricia K [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Heitmann, Katrin [Los Alamos National Laboratory; Lo, Li - Ta [Los Alamos National Laboratory; Patchett, John M [Los Alamos National Laboratory; Williams, Sean J [Los Alamos National Laboratory; Woodring, Jonathan L [Los Alamos National Laboratory; Wu, Joshua [Los Alamos National Laboratory; Hsu, Chung - Hsing [ONL

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  7. Parallel supercomputers for lattice gauge theory.

    Science.gov (United States)

    Brown, F R; Christ, N H

    1988-03-18

    During the past 10 years, particle physicists have increasingly employed numerical simulation to answer fundamental theoretical questions about the properties of quarks and gluons. The enormous computer resources required by quantum chromodynamic calculations have inspired the design and construction of very powerful, highly parallel, dedicated computers optimized for this work. This article gives a brief description of the numerical structure and current status of these large-scale lattice gauge theory calculations, with emphasis on the computational demands they make. The architecture, present state, and potential of these special-purpose supercomputers is described. It is argued that a numerical solution of low energy quantum chromodynamics may well be achieved by these machines.

  8. Building, using, and maximizing the impact of concept inventories in the biological sciences: report on a National Science Foundation sponsored conference on the construction of concept inventories in the biological sciences.

    Science.gov (United States)

    Garvin-Doxas, Kathy; Klymkowsky, Michael; Elrod, Susan

    2007-01-01

    The meeting "Conceptual Assessment in the Biological Sciences" was held March 3-4, 2007, in Boulder, Colorado. Sponsored by the National Science Foundation and hosted by University of Colorado, Boulder's Biology Concept Inventory Team, the meeting drew together 21 participants from 13 institutions, all of whom had received National Science Foundation funding for biology education. Topics of interest included Introductory Biology, Genetics, Evolution, Ecology, and the Nature of Science. The goal of the meeting was to organize and leverage current efforts to develop concept inventories for each of these topics. These diagnostic tools are inspired by the success of the Force Concept Inventory, developed by the community of physics educators to identify student misconceptions about Newtonian mechanics. By working together, participants hope to lessen the risk that groups might develop competing rather than complementary inventories.

  9. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    Science.gov (United States)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national, and international levels. Signed: Ted Hecht, University of Michigan. Conference photograph. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher (Chair, Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherrill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey (Co-chair, Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary (Co-chair, Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard (Coordinator, Louisiana State University).

  10. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges Electricity Service Providers (ESPs) face in efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. We present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to those of the United States (US). We then show that, contrary to expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs ... (LRZ). We conclude that perspectives on demand management depend on the electricity market and pricing in the geographical region and on the degree of control that a particular SC has in power-purchase negotiation.

  11. Multi-petascale highly efficient parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O' Brien, John K.; O' Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
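    As a sketch of how nearest-neighbor addressing works on such a torus interconnect, the following computes a node's neighbors with wraparound. The dimension extents are illustrative, not the machine's actual geometry; the point is that a 5-D torus gives each node exactly ten links (two per dimension):

```python
# Sketch: neighbor addressing on a 5-D torus interconnect.
# DIMS is a hypothetical geometry, not the patented machine's.
DIMS = (4, 4, 4, 4, 2)

def neighbors(coord, dims=DIMS):
    """Return the 2*len(dims) nearest neighbors of a node, with wraparound."""
    result = []
    for axis, extent in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % extent  # torus wraparound
            result.append(tuple(n))
    return result

# Every node has exactly 10 torus links (2 per dimension).
assert len(neighbors((0, 0, 0, 0, 0))) == 10
```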

  12. A workbench for tera-flop supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U. [High Performance Computing Center Stuttgart (HLRS), Stuttgart (Germany)

    2003-07-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  13. Seismic signal processing on heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that…
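    The correlation kernel at the heart of such an application can be sketched in a few lines. This is an illustrative frequency-domain implementation (the function name and zero-padding strategy are my own), not the library the authors describe:

```python
import numpy as np

def noise_correlation(a, b):
    """Frequency-domain cross-correlation of two real-valued noise records,
    the core kernel of ambient noise interferometry. Returns lags from
    -(len(b)-1) to len(a)-1, matching np.correlate(a, b, 'full')."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()           # pad to next power of two
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b, nfft)
    cc = np.fft.irfft(A * np.conj(B), nfft)    # circular correlation
    # Reassemble linear correlation: negative lags sit at the array's tail.
    return np.concatenate((cc[-(len(b) - 1):], cc[:len(a)]))
```

    On a GPU system like Piz Daint, the same structure applies with the FFTs offloaded to the accelerator; the padding guarantees the circular correlation equals the linear one.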

  14. Most Social Scientists Shun Free Use of Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  15. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  16. Will Your Next Supercomputer Come from Costco?

    Energy Technology Data Exchange (ETDEWEB)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  17. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
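    As a rough illustration of what the STREAM benchmark measures, a minimal triad kernel (a = b + scalar*c) can be timed as below. This sketch is only a stand-in for the tuned STREAM code used on systems like SANAM; the array size and repetition count are arbitrary choices:

```python
import time
import numpy as np

def stream_triad(n=10_000_000, scalar=3.0, reps=3):
    """Minimal STREAM-style triad: time a = b + scalar*c and report GB/s.
    Illustrative only; the real STREAM benchmark is a tuned C/Fortran code."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    best = float('inf')
    for _ in range(reps):
        t0 = time.perf_counter()
        a = b + scalar * c                # the triad kernel
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 3 * n * 8               # read b, read c, write a (8-byte doubles)
    return bytes_moved / best / 1e9       # bandwidth in GB/s
```

    STREAM deliberately counts only the bytes the kernel must move, so the result approximates sustainable memory bandwidth rather than peak arithmetic rate.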

  18. Multiprocessing on supercomputers for computational aerodynamics

    Science.gov (United States)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance capability of 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, this improvement is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instruction, multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need to develop a new algorithm, and the procedure for building a code with this approach is automated with the Unix stream editor.
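    The mapping the abstract describes (an existing single-processor kernel applied concurrently to slices of the data, one slice per processor) can be sketched as follows. The kernel and the slicing are illustrative stand-ins, and the halo exchange a real flow solver would need between slices is omitted for brevity:

```python
import numpy as np
from multiprocessing import Pool

def relax_slice(block):
    """Toy stand-in for a single-processor flow-solver kernel on one grid slice."""
    return 0.25 * (np.roll(block, 1, 0) + np.roll(block, -1, 0)
                   + np.roll(block, 1, 1) + np.roll(block, -1, 1))

def multitasked_step(grid, nproc=4):
    """Apply the unchanged kernel to nproc slices in parallel (MIMD-style),
    then reassemble; boundary (halo) exchange between slices is omitted."""
    slices = np.array_split(grid, nproc, axis=0)   # map data to processors
    with Pool(nproc) as pool:
        return np.vstack(pool.map(relax_slice, slices))
```

    The key point matches the abstract: the kernel itself is untouched; only the data decomposition and the driver around it are new.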

  19. The PMS project Poor Man's Supercomputer

    CERN Document Server

    Csikor, Ferenc; Hegedüs, P; Horváth, V K; Katz, S D; Piróth, A

    2001-01-01

    We briefly describe the Poor Man's Supercomputer (PMS) project that is carried out at Eotvos University, Budapest. The goal is to develop a cost effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest neighbour interactions. To reach this goal we developed the PMS architecture using PC components and designed a special, low cost communication hardware and the driver software for Linux OS. Our first implementation of the PMS includes 32 nodes (PMS1). The performance of the PMS1 was tested by Lattice Gauge Theory simulations. Using SU(3) pure gauge theory or bosonic MSSM on the PMS1 computer we obtained 3$/Mflops price-per-sustained performance ratio. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.

  20. The BlueGene/L Supercomputer

    CERN Document Server

    Bhanot, G V; Gara, A; Vranas, P M; Bhanot, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2002-01-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 of such nodes are connected into a 3-d torus with a geometry of 32x32x64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
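    The figures quoted in the abstract are mutually consistent, as a quick arithmetic check shows (assuming, as the abstract implies, that both 2.8 Gflops floating point units on a node contribute to peak):

```python
# Consistency check of the BlueGene/L figures quoted above.
nodes = 32 * 32 * 64                 # 3-d torus geometry
assert nodes == 65_536               # matches the stated node count

mem_total = nodes * 256 * 2**20      # 256 MB external memory per node
assert mem_total == 16 * 2**40       # 16 TeraBytes total, as quoted

# Two 2.8 Gflops FPUs per node give the machine's peak rate:
peak_flops = nodes * 2 * 2.8e9       # ~3.67e14, i.e. roughly the quoted 360 Teraflops
```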

  1. Employer-sponsored pension plans

    Directory of Open Access Journals (Sweden)

    Rakonjac-Antić Tatjana N.

    2004-01-01

    Apart from pension plans within social insurance, developed pension systems also offer individuals schemes which may ensure a significant part of their total pension, among them employer-sponsored pension plans and individual pension plans. The most widely used employer-sponsored pension plan in the USA is the 401(k), in which both the employer and the employee contribute to the financing of the pension. These contributions, as well as the returns on their investment, have a preferential tax treatment, i.e. they do not enter the tax base; the funds are taxed only when drawn from the account in the form of a pension. This paper aims to present the functioning of the 401(k) pension plan as the most widely used employer-sponsored pension plan in the USA, which is likely, in a modified form, to have an important place within our future reformed pension insurance system.

  2. 45 CFR 1226.12 - Sponsor employees.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Sponsor employees. 1226.12 Section 1226.12 Public... PROHIBITIONS ON ELECTORAL AND LOBBYING ACTIVITIES Sponsor Employee Activities § 1226.12 Sponsor employees. Sponsor employees whose salaries or other compensation are paid, in whole or in part, with agency...

  3. Foundry provides the network backbone for supercomputing

    CERN Multimedia

    2003-01-01

    Some of the results from the fourth annual High-Performance Bandwidth Challenge, held in conjunction with SC2003, the international conference on high-performance computing and networking which occurred last week in Phoenix, AZ (1/2 page).

  4. World's biggest 'virtual supercomputer' given the go-ahead

    CERN Multimedia

    2003-01-01

    "The Particle Physics and Astronomy Research Council has today announced GBP 16 million to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1 page).

  5. Bibliography of publications related to Nevada-sponsored research of the proposed Yucca Mountain high-level radioactive waste repository site through 1994

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.

    1994-12-01

    Since 1985, the State of Nevada has sponsored academic/private sector research into various health, safety, and environmental issues identified with the Yucca Mountain site. This research has been documented in scientific peer-reviewed literature, conferences, and workshops, as well as numerous state-sponsored University thesis and dissertation programs. This document is a bibliography of the scientific articles, manuscripts, theses, dissertations, conference symposium abstracts, and meeting presentations produced as a result of state-sponsored research.

  6. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  7. Taking ASCI supercomputing to the end game.

    Energy Technology Data Exchange (ETDEWEB)

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zetaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space computing, irreversible logic, analog computers, and other ways to address stockpile stewardship that are outside the scope of this report.

  8. Simulating functional magnetic materials on supercomputers.

    Science.gov (United States)

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10^3 spin-polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size-dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1_0 phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field-induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram.

  9. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  10. US green building conference - 1994

    Energy Technology Data Exchange (ETDEWEB)

    Fanney, A.H.; Whitter, K.M.; Traugott, A.E.; Simon, L.N. [eds.

    1994-12-31

    This report constitutes the proceedings of the Green Building Conference held in Gaithersburg, Maryland, February 16-17, 1994. The conference was co-sponsored by the National Institute of Standards and Technology (NIST) and the US Green Building Council (USGBC). Over 450 individuals attended the conference representing building product manufacturers, building owners and managers, environmental groups, utilities, contractors, builders, architects, engineers, and the local, state, and the federal governments. The conference provided an opportunity to acquire practical, useful information on green buildings, resources, and guidelines. Eighteen papers were presented at the conference. Separate abstracts and indexing were prepared for each paper for inclusion in the Energy Science and Technology Database.

  11. Template for preparation of papers for IEEE sponsored conferences & symposia.

    Science.gov (United States)

    Sacchi, L; Dagliati, A; Tibollo, V; Leporati, P; De Cata, P; Cerra, C; Chiovato, L; Bellazzi, R

    2015-01-01

    To improve access to medical information, it is necessary to design and implement integrated informatics techniques that gather data from different and heterogeneous sources. This paper describes the technologies used to integrate data coming from the electronic medical record of the IRCCS Fondazione Maugeri (FSM) hospital of Pavia, Italy, and to combine them with administrative and pharmacy drug-purchase data from the local healthcare agency (ASL) of the Pavia area and with environmental open data of the same region. The integration process is focused on data from a cohort of one thousand patients diagnosed with Type 2 Diabetes Mellitus (T2DM). Data analysis and temporal data mining techniques have been integrated to enhance the initial dataset, allowing patients to be stratified using further information derived from the mined data, such as behavioral patterns of prescription-related drug purchases and other frequent clinical temporal patterns, through an intuitive dashboard-controlled system.

  12. An integrated distributed processing interface for supercomputers and workstations

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse driven menus. We have also developed a distributed application that integrated a two point boundary value problem on one of our Cray Supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language independent controls to show capabilities of the workstation/supercomputer combination. 8 refs.

  13. Teaching of Psychology Conference: Ideas & Innovations. Proceedings from the Annual Conference (24th, Tarrytown, NY, March 19-20, 2010)

    Science.gov (United States)

    Howell-Carter, Marya, Ed.; Gonder, Jennifer, Ed.

    2010-01-01

    Conference proceedings of the 24th Annual Conference on the Teaching of Psychology: Ideas and Innovations, sponsored by the Psychology Department of the State University of New York at Farmingdale. The conference theme for 2010 was Fostering, Assessing, and Sustaining Student Engagement. The conference featured two keynote addresses from prominent…

  14. Latin American Conference on Agricultural Education

    Science.gov (United States)

    Agan, Ray

    1971-01-01

    Presents the subject matter of a UNESCO sponsored conference in Pamplona, Colombia, April 26- May 23, 1970 of school directors and Ministry officials in Agricultural Education from 12 Latin American Countries. (GB)

  15. Recent results from the Swinburne supercomputer software correlator

    Science.gov (United States)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.

  16. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
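    The core-hour arithmetic in the abstract can be checked directly. The total Kepler target count and the number of cores used below are assumptions for illustration, not mission figures:

```python
# Back-of-envelope check of the FLTI arithmetic quoted above.
injections_per_core_hour = 16      # ~16 injections per Pleiades core per hour
injections_per_star = 2000         # "shallow" experiment, per target star

core_hours_per_star = injections_per_star / injections_per_core_hour
assert core_hours_per_star == 125  # 125 core-hours per star

def wall_hours(n_stars, fraction, cores):
    """Wall-clock hours to process a fraction of n_stars on a given core count.
    n_stars and cores are hypothetical inputs, not Kepler/Pleiades figures."""
    return fraction * n_stars * core_hours_per_star / cores
```

    For example, under the assumption of roughly 200,000 target stars and 20,000 concurrently available cores, 16% of the targets would take about 200 wall-clock hours, consistent with the estimate in the abstract.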

  17. Access to Supercomputers. Higher Education Panel Report 69.

    Science.gov (United States)

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  18. The Sky's the Limit When Super Students Meet Supercomputers.

    Science.gov (United States)

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  19. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Maeno, T [Brookhaven National Laboratory (BNL); Mashinistov, R. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Nilsson, P [Brookhaven National Laboratory (BNL); Novikov, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Poyda, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Ryabinkin, E. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Teslyuk, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Tsulaia, V. [Lawrence Berkeley National Laboratory (LBNL); Velikhov, V. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Wen, G. [University of Wisconsin, Madison; Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of

  20. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. 
This implementation was tested with a variety of Monte-Carlo workloads
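    The MPI-wrapper pattern described in the two records above can be illustrated with a minimal sketch: each MPI rank selects a disjoint slice of single-threaded payload jobs and runs them independently. This is a toy stand-in, not PanDA code; a real wrapper would obtain `rank` and `size` from the MPI runtime (e.g. via mpi4py), and the payload names below are hypothetical.

```python
# Sketch of a light-weight MPI wrapper: run N single-threaded payloads
# in parallel across `size` ranks, each rank taking every size-th job.

def jobs_for_rank(jobs, rank, size):
    """Round-robin assignment of payload jobs to one MPI rank."""
    return [job for i, job in enumerate(jobs) if i % size == rank]

def run_rank(jobs, rank, size, run=lambda job: f"done:{job}"):
    """Run this rank's share of single-threaded payloads sequentially."""
    return [run(job) for job in jobs_for_rank(jobs, rank, size)]

# Example: 8 hypothetical event-generation payloads over a 4-rank wrapper.
payloads = [f"evgen-{i}" for i in range(8)]
print(run_rank(payloads, rank=1, size=4))   # rank 1 runs evgen-1 and evgen-5
```

    The point of the design is that the batch system sees one large MPI job, while each core inside it executes ordinary single-threaded workloads.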

  1. Sponsoring-Controlling: Empirische Ergebnisse offenbaren Defizite

    OpenAIRE

    Hohenauer, Robert; Fischer, Peter Mathias; Reinecke, Sven

    2013-01-01

    What can sponsoring actually contribute to long-term corporate success? How can the economic effectiveness of this communication tool be demonstrated? These questions can only be answered through professionally conducted sponsoring controlling. The following exemplary results from the Sponsoring-Controlling study by the Institute of Marketing at the University of St. Gallen (HSG) provide insights into the status of performance measurement for this communication tool within companies.

  2. Sponsors' participation in conduct and reporting of industry trials

    DEFF Research Database (Denmark)

    Lundh, Andreas; Krogsbøll, Lasse T; Gøtzsche, Peter C

    2012-01-01

    Bias in industry-sponsored trials is common and the interpretation of the results can be particularly distorted in favour of the sponsor's product. We investigated sponsors' involvement in the conduct and reporting of industry-sponsored trials.

  3. 4th International Cryocoolers Conference

    CERN Document Server

    Patton, George; Knox, Margaret

    1987-01-01

    The Cryocoolers 4 proceedings archives the contributions of leading international experts at the 4th International Cryocooler Conference that was held in Easton, Maryland on September 25-26, 1986. About 170 people attended the conference representing 11 countries, 14 universities, 21 government laboratories and 60 industrial companies. Thirty-one papers were presented describing advancements and applications of cryocoolers in the temperature range below 80K. This year's conference was sponsored by the David Taylor Naval Ship Research and Development Center of Annapolis, Maryland, and the conference proceedings reproduced here were published by them.

  4. Conference Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Riveros, P. A.; Dutrizac, J. E. [Natural Resources Canada, Ottawa, ON (Canada)] [eds.

    2001-07-01

    This workshop is part of a continuing series of joint workshops organized by CANMET of Natural Resources Canada and the Research Directorate-General of the European Commission in the areas of sustainable metallurgical processing, recycling and environmental protection. The program presented at this conference also benefited from the organizational support of the Canadian Association of Recycling Industries. Over the past twenty years these workshops served as a valuable forum for the discussion of the technological issues associated with metallurgical processing, recycling and compliance with environmental regulations within the framework of sustainable development. The program this year was organized in five sessions. A total of 32 papers were presented. Session One emphasized the international dimension of modern research as illustrated by the Intelligent Manufacturing System (IMS) program. Session Two dealt with recycling, with special attention to the recycling of plastics and construction materials. Session Three was devoted to highlighting European efforts to treat chromium-bearing solutions or to find alternatives to chromium salts in surface treatment operations. Session Four emphasized primary and secondary zinc processing and the importance of energy conservation. The final session reviewed waste management practices and the utilization of waste materials. Opening addresses by representatives of the sponsoring organizations and a list of conference attendees and their affiliations are also included.

  5. NATO Conference

    CERN Document Server

    Lynn, W

    1975-01-01

    The contents of this volume involve selection, emendation and up-dating of papers presented at the NATO Conference "Mathematical Analysis of Decision problems in Ecology" in Istanbul, Turkey, July 9-13, 1973. It was sponsored by the System Sciences Division of NATO directed by Dr. B. Bayraktar with local arrangements administered by Dr. Ilhami Karayalcin, professor of the Department of Industrial Engineering at the Technical University of Istanbul. It was organized by A. Charnes, University professor across the University of Texas System, and Walter R. Lynn, Director of the School of Civil and Environmental Engineering at Cornell University. The objective of the conference was to bring together a group of leading researchers from the major sciences involved in ecological problems and to present the current state of progress in research of a mathematical nature which might assist in the solution of these problems. Although their presentations are not herein recorded, the keynote address of Dr....

  6. SPONSORING, BRAND VALUE AND SOCIAL MEDIA

    Directory of Open Access Journals (Sweden)

    Alexander Zauner

    2012-10-01

    The increasing involvement of individuals in social media over the past decade has enabled firms to pursue new avenues in communication and sponsoring activities. Besides general research on either social media or sponsoring, questions regarding the consequences of a joint activity (sponsoring activities in social media) remain unexplored. Hence, the present study analyses whether the perceived image of the brand and the celebrity endorser credibility of a top sports team influence the perceived brand value of the sponsoring firm in a social media setting. Moreover, these effects are compared between existing customers and non-customers of the sponsoring firm. Interestingly, perceived celebrity endorser credibility plays no role in forming brand value perceptions in the case of the existing customers. Implications for marketing theory and practice are derived.

  7. Paradigma Baru Sponsor sebagai Mitra Penyelenggaraan Event

    Directory of Open Access Journals (Sweden)

    Lidya Wati Evelina

    2011-09-01

    This article examines why sponsorship has traditionally been viewed only as a source of funding for public relations or marketing communication events. The method used is qualitative research based on observation, library study and content analysis. The results show that the role of sponsorship has shifted from a mere funding source to a cooperative partnership (mutual symbiosis) between sponsor and event organizer. The article explores this change in the concept of sponsorship: seeking sponsors is no longer simply an exercise in fund mobilization but the building of a partnership between the event organizer and the sponsoring company, in which the sponsor is not only a financial supporter but also a party that draws mutual benefit from the cooperation. The conclusion is that a close, mutually beneficial relationship exists between the two parties cooperating in staging an event (sponsor and event organizer).

  9. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing `94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  10. Applications of parallel supercomputers: Scientific results and computer science lessons

    Energy Technology Data Exchange (ETDEWEB)

    Fox, G.C.

    1989-07-12

    Parallel Computing has come of age with several commercial and inhouse systems that deliver supercomputer performance. We illustrate this with several major computations completed or underway at Caltech on hypercubes, transputer arrays and the SIMD Connection Machine CM-2 and AMT DAP. Applications covered are lattice gauge theory, computational fluid dynamics, subatomic string dynamics, statistical and condensed matter physics, theoretical and experimental astronomy, quantum chemistry, plasma physics, grain dynamics, computer chess, graphics ray tracing, and Kalman filters. We use these applications to compare the performance of several advanced architecture computers including the conventional CRAY and ETA-10 supercomputers. We describe which problems are suitable for which computers in the terms of a matching between problem and computer architecture. This is part of a set of lessons we draw for hardware, software, and performance. We speculate on the emergence of new academic disciplines motivated by the growing importance of computers. 138 refs., 23 figs., 10 tabs.

  11. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources are needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  12. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for job...

  13. Supercomputers ready for use as discovery machines for neuroscience

    OpenAIRE

    Kunkel, Susanne; Schmidt, Maximilian; Helias, Moritz; Eppler, Jochen Martin; Igarashi, Jun; Masumoto, Gen; Fukai, Tomoki; Ishii, Shin; Plesser, Hans Ekkehard; Morrison, Abigail; Diesmann, Markus

    2013-01-01

    NEST is a widely used tool to simulate biological spiking neural networks [1]. The simulator is subject to continuous development, which is driven by the requirements of the current neuroscientific questions. At present, a major part of the software development focuses on the improvement of the simulator's fundamental data structures in order to enable brain-scale simulations on supercomputers such as the Blue Gene system in Jülich and the K computer in Kobe. Based on our memory-u...

  14. Scientists turn to supercomputers for knowledge about universe

    CERN Multimedia

    White, G

    2003-01-01

    The DOE is funding the computers at the Center for Astrophysical Thermonuclear Flashes which is based at the University of Chicago and uses supercomputers at the nation's weapons labs to study explosions in and on certain stars. The DOE is picking up the project's bill in the hope that the work will help the agency learn to better simulate the blasts of nuclear warheads (1 page).

  15. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  16. Study of ATLAS TRT performance with GRID and supercomputers

    Science.gov (United States)

    Krasnopevtsev, D. V.; Klimentov, A. A.; Mashinistov, R. Yu.; Belyaev, N. L.; Ryabinkin, E. A.

    2016-09-01

    One of the most important tasks in ATLAS physics analysis is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. The paper presents Transition Radiation Tracker performance results obtained using the ATLAS GRID and the Kurchatov Institute's Data Processing Center, including its Tier-1 grid site and supercomputer, as well as an analysis of CPU efficiency during these studies.

  17. From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing

    CERN Document Server

    Groen, Derek

    2015-01-01

    We describe the political and technical complications encountered during the astronomical CosmoGrid project. CosmoGrid is a numerical study on the formation of large-scale structure in the universe. The simulations are challenging due to the enormous dynamic range in spatial and temporal coordinates, as well as the enormous computer resources required. In CosmoGrid we dealt with the computational requirements by connecting up to four supercomputers via an optical network and making them operate as a single machine. This was challenging, if only for the fact that the supercomputers of our choice are separated by half the planet: three of them are scattered across Europe and the fourth is in Tokyo. The co-scheduling of multiple computers and the 'gridification' of the code enabled us to achieve an efficiency of up to 93% for this distributed intercontinental supercomputer. In this work, we find that high-performance computing on a grid can be done much more effectively if the sites involved are will...

  18. Proceedings of the first energy research power supercomputer users symposium

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  19. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
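    The scheduling arithmetic behind the sub-jobs idea above can be sketched as packing many small independent tasks into one large resource allocation, run in waves. This is an illustrative toy, not the Swift/Cobalt implementation, and the core counts are hypothetical.

```python
# Toy sketch of many-task packing: fill a fixed-size resource block with
# small tasks, starting a new wave whenever the block is full.

def schedule_waves(task_cores, block_cores):
    """Greedy first-fit packing of small tasks into waves of a fixed block."""
    waves, current, used = [], [], 0
    for need in task_cores:
        if used + need > block_cores:   # block full: start a new wave
            waves.append(current)
            current, used = [], 0
        current.append(need)
        used += need
    if current:
        waves.append(current)
    return waves

# 10 tasks of 512 cores each inside a 2048-core block -> 3 waves (4+4+2 tasks).
print(schedule_waves([512] * 10, 2048))
```

    The payoff is the same as with sub-block jobs: the machine sees one large allocation, while many small jobs execute inside it without individual queue submissions.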

  20. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
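    The core idea above, finding the syntactic structure of log messages and grouping them online, can be illustrated with a minimal sketch. This toy wildcards digit-bearing tokens to derive a template per message and buckets messages by template in a single pass; it is a stand-in for the paper's textual clustering, not its actual algorithm.

```python
# Toy online log clustering: derive a syntactic template per message by
# wildcarding variable-looking tokens, then group messages by template.
import re
from collections import defaultdict

def template(message):
    """Replace numeric / digit-bearing hex tokens with a wildcard."""
    return " ".join(
        "<*>" if re.fullmatch(r"(0x)?[0-9a-fA-F]*\d[0-9a-fA-F]*", t) else t
        for t in message.split())

def cluster(messages):
    """One-pass (online) grouping of messages by their template."""
    groups = defaultdict(list)
    for m in messages:
        groups[template(m)].append(m)
    return dict(groups)

logs = ["node 12 link error", "node 847 link error", "fan speed low on unit 3"]
print(cluster(logs))
```

    Messages differing only in variable fields collapse into one semantic group, which is the property that makes downstream temporal correlation between groups tractable.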

  1. Joint US/German Conference

    CERN Document Server

    Gulledge, Thomas; Jones, Albert

    1993-01-01

    This proceedings volume contains selected and refereed contributions that were presented at the conference on "Recent Developments and New Perspectives of Operations Research in the Area of Production Planning and Control" in Hagen/Germany, 25. - 26. June 1992. This conference was organized with the cooperation of the Fernuniversität Hagen and was jointly hosted by the "Deutsche Gesellschaft für Operations Research (DGOR)" and the "Manufacturing Special Interest Group of the Operations Research Society of America (ORSA-SIGMA)". For the organization of the conference we received generous financial support from the sponsors listed at the end of this volume. We wish to express our appreciation to all supporters for their contributions. This conference was the successor of the joint ORSA/DGOR-conference in Gaithersburg/Maryland, USA, on the 30. and 31. July 1991. Both OR-societies committed themselves in 1989 to host joint conferences on special topics of interest from the field of operations research. This goal ...

  2. CIEE 1993 annual conference: Program

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    The California Institute for Energy Efficiency's third annual conference highlights the results of CIEE-sponsored multiyear research in three programs: Building Energy Efficiency, Air Quality Impacts of Energy Efficiency, and End-Use Resource Planning. Results from scoping studies, Director's discretionary research, and exploratory research are also featured.

  3. Who benefits from firm-sponsored training?

    OpenAIRE

    Dostie, Benoit

    2015-01-01

    Workers participating in firm-sponsored training receive higher wages as a result. But given that firms pay the majority of costs for training, shouldn't they also benefit? Empirical evidence shows that this is in fact the case. Firm-sponsored training leads to higher productivity levels and increased innovation, both of which benefit the firm. Training can also be complementary to, and enhance, other types of firm investment, particularly in physical capital, such as information and communic...

  4. A "Position Paradox" in Sponsored Search Auctions

    OpenAIRE

    Kinshuk Jerath; Liye Ma; Young-Hoon Park; Kannan Srinivasan

    2011-01-01

    We study the bidding strategies of vertically differentiated firms that bid for sponsored search advertisement positions for a keyword at a search engine. We explicitly model how consumers navigate and click on sponsored links based on their knowledge and beliefs about firm qualities. Our model yields several interesting insights; a main counterintuitive result we focus on is the "position paradox." The paradox is that a superior firm may bid lower than an inferior firm and obtain a position ...

  5. 4th International Conference on Advanced Robotics

    CERN Document Server

    1989-01-01

    The Fourth International Conference on Advanced Robotics was held in Columbus, Ohio, U. S. A. on June 13th to 15th, 1989. The first two conferences in this series were held in Tokyo. The third was held in Versailles, France in October 1987. The International Conference on Advanced Robotics is affiliated with the International Federation of Robotics. This conference was sponsored by The Ohio State University. The American Society of Mechanical Engineers was a cooperating co-sponsor. The objective of the International Conference on Advanced Robotics is to provide an international exchange of information on the topic of advanced robotics. This was adopted as one of the themes for international research cooperation at a meeting of representatives of seven industrialized countries held in Williamsburg, U. S. A. in May 1983. The present conference is truly international in character with contributions from authors of twelve countries. (Bulgaria, Canada, France, Great Britain, India, Italy, Japan, Peoples Republic o...

  6. Conference Report: 5th Annual Georgia Conference on Information Literacy

    Directory of Open Access Journals (Sweden)

    Rebecca Ziegler

    2009-11-01

    The 5th annual Georgia Conference on Information Literacy took place in Savannah, Georgia on October 3-4, 2008. Since its inception, this conference has drawn participants from across the United States and even a few from abroad. Jointly sponsored by the Zach S. Henderson Library, the Department of Writing and Linguistics, the College of Education, and the Center for Continuing Education at Georgia Southern University, the conference offers both theoretical and practical discussions of the complex issues involved in teaching students how to find, interpret and use information in emerging electronic technologies against the backdrop of one of America’s loveliest cities.

  7. Spectators' perceptions of official sponsors in the FIFA 2010 World ...

    African Journals Online (AJOL)

    Spectators' perceptions of official sponsors in the FIFA 2010 World Cup TM and purchase intentions of sponsors products or brands. ... and interest, personal liking of the event, attitude towards sponsors, status of event and perceived sincerity.

  8. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorología, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  9. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    OpenAIRE

A. Gunzinger; Bäumle, B.; Frey, M.; Klebl, M.; Kocheisen, M.; Kohler, P.; Morel, R.; Müller, U.; Rosenthal, M.

    1996-01-01

At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with conventional supercomputers, but electric power requirements are reduced by a factor of 1,000, weight is reduced by a factor of...

  10. 6th International Cryocoolers Conference

    CERN Document Server

    Knox, Margaret

    1991-01-01

Cryocoolers 6 archives developments and performance measurements in the field of cryocoolers based on the contributions of leading international experts at the 6th International Cryocooler Conference, held in Plymouth, Massachusetts, on October 25-26, 1990. This year's conference consisted of 54 papers and was sponsored by the David Taylor Naval Ship Research and Development Center of Annapolis, Maryland. The conference proceedings, containing 49 submitted manuscripts, were published by the David Taylor Naval Ship Research and Development Center in the report reproduced here.

  11. Numerical simulations of astrophysical problems on massively parallel supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Glinsky, Boris

    2016-10-01

In this paper, we propose the latest version of the numerical model for simulating the dynamics of astrophysical objects, and a new realization of our AstroPhi code for Intel Xeon Phi based RSC PetaStream supercomputers. The co-design of a computational model for the description of astrophysical objects is described. The parallel implementation and scalability tests of the AstroPhi code are presented. We achieve a 73% weak scaling efficiency using 256 Intel Xeon Phi accelerators with 61440 threads.
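As a quick illustration of the weak-scaling figure quoted above: weak scaling efficiency is conventionally the baseline runtime divided by the runtime at N accelerators for a fixed per-accelerator problem size. A minimal sketch with hypothetical timings (not the paper's data):

```python
# Weak scaling efficiency: with a fixed problem size per accelerator,
# ideal runtime stays constant as N grows, so efficiency = t(1) / t(N).
def weak_scaling_efficiency(t_base: float, t_n: float) -> float:
    """Efficiency of a weak-scaling run relative to the baseline."""
    return t_base / t_n

# Hypothetical timings (seconds per step): a 73% figure corresponds to the
# large run taking ~1/0.73 of the baseline time per step.
t_1 = 10.0
t_256 = t_1 / 0.73
eff = weak_scaling_efficiency(t_1, t_256)
print(f"{eff:.0%}")  # 73%
```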

  12. AENEAS A Custom-built Parallel Supercomputer for Quantum Gravity

    CERN Document Server

    Hamber, H W

    1998-01-01

    Accurate Quantum Gravity calculations, based on the simplicial lattice formulation, are computationally very demanding and require vast amounts of computer resources. A custom-made 64-node parallel supercomputer capable of performing up to $2 \\times 10^{10}$ floating point operations per second has been assembled entirely out of commodity components, and has been operational for the last ten months. It will allow the numerical computation of a variety of quantities of physical interest in quantum gravity and related field theories, including the estimate of the critical exponents in the vicinity of the ultraviolet fixed point to an accuracy of a few percent.

  13. 2014 China International Friendship Cities Conference and Guangzhou International Urban Innovation Conference

    Institute of Scientific and Technical Information of China (English)

    Liu; Yan

    2015-01-01

    The 2014 China International Friendship Cities Conference and the Guangzhou International Urban Innovation Conference co-sponsored by the CPAFFC,China International Friendship Cities Association(CIFCA),and the Guangzhou Municipal People’s Government,and hosted by the latter’s Foreign Affairs Office were held in the Guangzhou Baiyun International Convention Center last November 28-29.

  14. 78 FR 17866 - New Animal Drug Approvals; Change of Sponsor; Change of Sponsor's Drug Labeler Code; Gonadorelin...

    Science.gov (United States)

    2013-03-25

    ... Approvals; Change of Sponsor; Change of Sponsor's Drug Labeler Code; Gonadorelin Acetate; Isoflurane; Praziquantel; Propofol; Sevoflurane; Triamcinolone Acetonide AGENCY: Food and Drug Administration, HHS....

  15. The Political Economy of Federally Sponsored Data

    Directory of Open Access Journals (Sweden)

    Bart Ragon

    2013-11-01

Full Text Available Librarian involvement in the Open Access (OA) movement has traditionally focused on access to scholarly publications. Recent actions by the White House have focused attention on access to the data produced by federally sponsored research. Questions have emerged concerning access to the output of federally sponsored research and whether it is a public or private good. The political battle over access to federally funded research is closely tied to the ownership of the peer review process in higher education and its associated revenue streams; as a result, interest groups seeking to influence government regulation have politicized the issues. As a major funder of research in higher education, the federal government's policies are likely to drive change in research practices at higher education institutions and impact library services. The political economy of federally sponsored research data will shape research enterprises in higher education and inspire a number of new services distributed throughout the research life cycle.

  16. 78 FR 27859 - New Animal Drugs; Change of Sponsor's Name and Address; Change of Sponsor

    Science.gov (United States)

    2013-05-13

    ... sponsor for a new animal drug application (NADA) from Land O'Lakes Purina Feed LLC to Purina Nutrition LLC... HUMAN SERVICES Food and Drug Administration 21 CFR Parts 510 and 558 New Animal Drugs; Change of Sponsor.... SUMMARY: The Food and Drug Administration (FDA) is amending the animal drug regulations to reflect...

  17. A special purpose silicon compiler for designing supercomputing VLSI systems

    Science.gov (United States)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  18. Solidification in a Supercomputer: From Crystal Nuclei to Dendrite Assemblages

    Science.gov (United States)

    Shibuta, Yasushi; Ohno, Munekazu; Takaki, Tomohiro

    2015-08-01

Thanks to the recent progress in high-performance computational environments, the range of applications of computational metallurgy is expanding rapidly. In this paper, cutting-edge simulations of solidification from the atomic to the microstructural level performed on a graphics processing unit (GPU) architecture are introduced, with a brief introduction to advances in computational studies on solidification. In particular, million-atom molecular dynamics simulations captured the spontaneous evolution of anisotropy in a solid nucleus in an undercooled melt and homogeneous nucleation without any inducing factor, which is followed by grain growth. At the microstructural level, the quantitative phase-field model has been gaining importance as a powerful tool for predicting solidification microstructures. In this paper, the convergence behavior of simulation results obtained with this model is discussed in detail. Such convergence ensures the reliability of results of phase-field simulations. Using the quantitative phase-field model, the competitive growth of dendrite assemblages during the directional solidification of a binary alloy bicrystal at the millimeter scale is examined by performing two- and three-dimensional large-scale simulations by multi-GPU computation on the supercomputer TSUBAME2.5. This cutting-edge approach using a GPU supercomputer is opening a new phase in computational metallurgy.

  19. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL; Parker, Lynne Edwards [ORNL

    2014-01-01

Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.
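The surrogate-agent idea described above can be sketched in miniature: run an expensive model offline over a sampled parameter grid, then answer queries cheaply from the precomputed table. The functions and numbers below are invented stand-ins, not EnergyPlus or the actual Autotune agents:

```python
import math

def expensive_simulation(insulation: float, setpoint: float) -> float:
    """Stand-in for an EnergyPlus run (toy formula, not the real engine)."""
    return 100.0 / insulation + 2.0 * abs(setpoint - 21.0)

# Offline phase: sample the parametric space (done on a supercomputer in
# the paper; here just a tiny 3x3 grid of hypothetical parameters).
table = [((i, s), expensive_simulation(i, s))
         for i in (1.0, 2.0, 4.0)
         for s in (18.0, 21.0, 24.0)]

def surrogate(insulation: float, setpoint: float) -> float:
    """Cheap agent: return the energy of the nearest sampled point."""
    _, energy = min(table, key=lambda kv: math.dist(kv[0], (insulation, setpoint)))
    return energy

# Online phase: queries cost a table lookup instead of a full simulation.
print(surrogate(2.1, 20.5))  # 50.0, the value at the nearest grid point (2.0, 21.0)
```

A real agent would use a trained regressor rather than nearest-neighbour lookup, but the offline/online split is the same.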

  20. Optimizing Linpack Benchmark on GPU-Accelerated Petascale Supercomputer

    Institute of Scientific and Technical Information of China (English)

    Feng Wang; Can-Qun Yang; Yun-Fei Du; Juan Chen; Hui-Zhan Yi; Wei-Xia Xu

    2011-01-01

In this paper we present the programming of the Linpack benchmark on the TianHe-1 system, the first petascale supercomputer system of China and the largest GPU-accelerated heterogeneous system ever attempted. A hybrid programming model consisting of MPI, OpenMP and streaming computing is described to exploit the task, thread and data parallelism of the Linpack. We explain how we optimized the load distribution across the CPUs and GPUs using a two-level adaptive method and describe the implementation in detail. To overcome the low bandwidth of CPU-GPU communication, we present a software pipelining technique to hide the communication overhead. Combined with other traditional optimizations, the Linpack we developed achieved 196.7 GFLOPS on a single compute element of TianHe-1. This result is 70.1% of the peak compute capability and 3.3 times faster than the result obtained using the vendor's library. On the full configuration of TianHe-1 our optimizations resulted in a Linpack performance of 0.563 PFLOPS, which made TianHe-1 the 5th fastest supercomputer on the Top500 list in November 2009.
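The per-element figures quoted above pin down the implied peak performance; a quick arithmetic check using only numbers from the abstract:

```python
achieved = 196.7          # GFLOPS on one TianHe-1 compute element (from the abstract)
efficiency = 0.701        # reported fraction of peak
peak = achieved / efficiency
print(f"implied peak: {peak:.1f} GFLOPS")      # implied peak: 280.6 GFLOPS

vendor = achieved / 3.3   # reported 3.3x speedup over the vendor library
print(f"vendor library: {vendor:.1f} GFLOPS")  # vendor library: 59.6 GFLOPS
```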

  1. 77 FR 26697 - New Animal Drugs; Change of Sponsor; Change of Sponsor Address; Change of Sponsor Name and...

    Science.gov (United States)

    2012-05-07

    ... that it has transferred ownership of, and all rights and interest in, abbreviated new animal drug... HUMAN SERVICES Food and Drug Administration 21 CFR Parts 510 and 522 New Animal Drugs; Change of Sponsor... Administration, HHS. ACTION: Final rule. SUMMARY: The Food and Drug Administration (FDA) is amending the...

  2. Conference on Transportation and Urban Life

    CERN Document Server

    Wenzel, H

    1976-01-01

All the papers in this volume were presented at a conference on Transportation and Urban Life, held in Munich during the third week of September, 1975. The conference was sponsored by the Special Programme Panels on Systems Science and Human Factors of the Science Committee of the North Atlantic Treaty Organisation. The distinguishing characteristic of the conference and of this volume lies in the combination of systems science and human factors contributions in the field of urban transportation. The initiative for attempting such a synthesis came from the sponsors. It is increasingly realised that the complexity of contemporary problems which applied scientists are being asked to solve is such that the coordinated efforts of several disciplines are needed to solve them. The brief which we formulated for the conference and distributed in our international call for papers was as follows: "The conference is intended to highlight significant psychological, sociological and economic aspects of transportatio...

  3. International Conference on Continental Volcanism-IAVCEI 2006

    Institute of Scientific and Technical Information of China (English)

    Yigang Xu; Martin A Menzies

    2006-01-01

The International Conference on Continental Volcanism, sponsored by the International Association of Volcanology and Chemistry of the Earth's Interior (IAVCEI), was held at the White Swan Hotel, Guangzhou, China, May 14th to 18th, 2006.

  4. China-Hungary Friendship City Conference Held in Budapest

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

Jointly sponsored by the CPAFFC and the Ministry of Local Government and Regional Development of Hungary, the China-Hungary Friendship City Conference was held in Budapest from February 18 to 21.

  5. 7 CFR 226.16 - Sponsoring organization provisions.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Sponsoring organization provisions. 226.16 Section 226... § 226.16 Sponsoring organization provisions. (a) Each sponsoring organization shall comply with all provisions of § 226.15. (b) Each sponsoring organization must submit to the State agency with its application...

  6. The State-Sponsored Student Entrepreneur

    Science.gov (United States)

    Mars, Matthew M.; Slaughter, Sheila; Rhoades, Gary

    2008-01-01

    This paper introduces the emergent role of the state-sponsored student entrepreneur within the academic capitalist knowledge/learning regime. Drawing on two clarifying cases of such entrepreneurship, the study explores the shifting boundaries between public and private sectors, the creation of new circuits of knowledge, and the entrepreneurial…

  7. Cheerleading. A Handbook for Teacher-Sponsors.

    Science.gov (United States)

    Chicago Board of Education, IL.

    The high school cheerleader has the opportunity to set the stage for sportsmanship, school spirit, and the mood of the athletic program. A carefully planned program is essential to leading cheerleading sections into feelings of good fellowship and interschool rapport. This handbook is designed to assist teacher-sponsors and administrators in the…

  8. RESPONSIBILITIES IN RESEARCH: THE ROLES OF SPONSORS

    Directory of Open Access Journals (Sweden)

    Roberto Cañete VILLAFRANCA

    2010-12-01

Full Text Available The principle of responsibility has assumed a special status in the contemporary scene, alongside the emerging view that scientific practice requires public oversight. The aim of this paper is to reflect on the roles played by sponsors from different institutions, national and international organizations, and pharmaceutical companies when moral conflicts emerge in this context. The complexity of everyday research involving human subjects obliges us to widen the discussion to include the various social and institutional partners. In this specific case, it becomes necessary to identify the potential sponsors, going beyond the question of the pharmaceutical industry alone. The sponsors most frequently listed in the literature are: the pharmaceutical industry, especially in international research for the development of new drugs, vaccines, and medical products and equipment; international organizations, in the case of epidemiological studies, drugs and vaccines for neglected diseases, and research capacity strengthening; national organizations working on specific health problems of a country's or region's populations; and research and academic institutions that have their own research policies. This paper takes the position that responsibility for ethical conduct during the preparation, development, and dissemination of research results needs to be shared among the different partners. The sponsors, however, have a crucial role in maintaining the integrity of research and in guaranteeing the protection of volunteers and of society at large.

  9. Market Imperfections and Firm-Sponsored Training

    NARCIS (Netherlands)

    Picchio, M.; van Ours, J.C.

    2010-01-01

Recent human capital theories predict that labor market frictions and product market competition influence firm-sponsored training. Using matched worker-firm data from Dutch manufacturing, our paper empirically assesses the validity of these predictions. We find that a decrease in labor market frictions...

  11. Brain Storming at the Annual Conference of CCOIC

    Institute of Scientific and Technical Information of China (English)

    Rose Yan

    2010-01-01

On December 18, the Annual Conference of CCOIC, sponsored by CCPIT, was successfully held in Beijing. The theme of this conference was "to drive the globalization of enterprises and improve their international competitiveness". About 600 people, including high-level government officials, famous economists and experts as well as entrepreneurs, attended the annual conference. Business wisdom sparked throughout the conference, and what follows are some of the inspiring thoughts shared there.

  12. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
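A minimal sketch of the graph representation such a model might use, with components as vertices and discovered links as edges (component names and the link table are invented for illustration; this is not the Octotron API):

```python
from collections import defaultdict

# Hypothetical link list, e.g. as gathered from switch MAC/port tables
# during automatic discovery.
links = [
    ("switch1", "node1"),
    ("switch1", "node2"),
    ("switch2", "node3"),
    ("switch1", "switch2"),   # inter-switch uplink
]

# Build an undirected adjacency map: each link is visible from both ends.
graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def neighbours(component: str) -> set:
    """All components directly connected to the given one."""
    return graph[component]

print(sorted(neighbours("switch1")))  # ['node1', 'node2', 'switch2']
```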

  13. Final report on the Copper Mountain conference on multigrid methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-10-01

The Copper Mountain Conference on Multigrid Methods was held on April 6-11, 1997. It followed the same format as the previous Copper Mountain Conferences on Multigrid Methods. Over 87 mathematicians from all over the world attended the meeting, and 56 half-hour talks on current research topics were presented. Talks with similar content were organized into sessions. Session topics included: fluids; domain decomposition; iterative methods; basics; adaptive methods; non-linear filtering; CFD; applications; transport; algebraic solvers; supercomputing; and student paper winners.

  14. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2017-01-01

The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide; additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for ...

  15. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  16. Modeling the weather with a data flow supercomputer

    Science.gov (United States)

    Dennis, J. B.; Gao, G.-R.; Todd, K. W.

    1984-01-01

    A static concept of data flow architecture is considered for a supercomputer for weather modeling. The machine level instructions are loaded into specific memory locations before computation is initiated, with only one instruction active at a time. The machine would have processing element, functional unit, array memory, memory routing and distribution routing network elements all contained on microprocessors. A value-oriented algorithmic language (VAL) would be employed and would have, as basic operations, simple functions deriving results from operand values. Details of the machine language format, computations with an array and file processing procedures are outlined. A global weather model is discussed in terms of a static architecture and the potential computation rate is analyzed. The results indicate that detailed design studies are warranted to quantify costs and parts fabrication requirements.

  17. Toward the Graphics Turing Scale on a Blue Gene Supercomputer

    CERN Document Server

    McGuigan, Michael

    2008-01-01

We investigate the raytracing performance that can be achieved on a class of Blue Gene supercomputers. We measure an 822-times speedup over a Pentium IV on a 6144-processor Blue Gene/L. We measure the computational performance as a function of the number of processors and problem size to determine the scaling performance of the raytracing calculation on the Blue Gene. We find nontrivial scaling behavior at large numbers of processors. We discuss applications of this technology to scientific visualization with advanced lighting and high resolution. We utilize three racks of a Blue Gene/L in our calculations, which is less than three percent of the capacity of the world's largest Blue Gene computer.

  18. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    Science.gov (United States)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors, and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU-based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.
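The communication/computation overlap analyzed above can be illustrated with a toy example: two mock tasks, one standing in for an MPI halo exchange and one for the GPU interior update, run concurrently so the total time approaches the maximum of the two rather than their sum (plain Python threads here, not MPI or CUDA):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def halo_exchange():       # stands in for an MPI boundary (halo) exchange
    time.sleep(0.05)
    return "halo"

def interior_update():     # stands in for the GPU interior computation
    time.sleep(0.05)
    return "interior"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    comm = pool.submit(halo_exchange)
    comp = pool.submit(interior_update)
    results = (comm.result(), comp.result())
elapsed = time.perf_counter() - start

# With overlap the wall time is ~max(0.05, 0.05) s, not the 0.1 s sum.
print(results, f"{elapsed:.2f}s")
```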

  19. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  20. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    Science.gov (United States)

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.

  1. Solving global shallow water equations on heterogeneous supercomputers.

    Science.gov (United States)

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigms of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, forming a picture of both the potential performance benefits and the programming efforts involved.

  2. Virtualizing Super-Computation On-Board Uas

    Science.gov (United States)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, the latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and the consumed watts.

  3. Effects of Disclosing Sponsored Content in Blogs

    Science.gov (United States)

    van Reijmersdal, Eva A.; Fransen, Marieke L.; van Noort, Guda; Opree, Suzanna J.; Vandeberg, Lisa; Reusch, Sanne; van Lieshout, Floor; Boerman, Sophie C.

    2016-01-01

    This article presents two studies examining the effects of disclosing online native advertising (i.e., sponsored content in blogs) on people's brand attitude and purchase intentions. To investigate the mechanisms underlying these effects, we integrated resistance theories with the persuasion knowledge model. We theorize that disclosures activate people's persuasion knowledge, which in turn evokes resistance strategies that people use to cope with the persuasion attempt made in the blog. We tested our predictions with two experiments (N = 118 and N = 134). We found that participants indeed activated persuasion knowledge in response to disclosures, after which they used both cognitive (counterarguing) and affective (negative affect) resistance strategies to decrease persuasion. The obtained insights not only advance our theoretical understanding of how disclosures of sponsored blogs affect persuasion but also provide valuable insights for legislators, advertisers, and bloggers. PMID:27721511

  4. Algorithmic Methods for Sponsored Search Advertising

    CERN Document Server

    Feldman, Jon

    2008-01-01

    Modern commercial Internet search engines display advertisements alongside the search results in response to user queries. Such sponsored search relies on market mechanisms to elicit prices for these advertisements, making use of an auction among advertisers who bid in order to have their ads shown for specific keywords. We present an overview of the current systems for such auctions and also describe the underlying game-theoretic aspects. The game involves three parties (advertisers, the search engine, and search users), and we present example research directions that emphasize the role of each. The algorithms for bidding and pricing in these games use techniques from three mathematical areas: mechanism design, optimization, and statistical estimation. Finally, we present some challenges in sponsored search advertising.
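The mechanism most commonly associated with such keyword auctions is the generalized second-price (GSP) rule: advertisers are ranked by bid, and each slot winner pays the bid of the advertiser ranked one position below. A minimal sketch of that rule (bid-ranking only; real engines also weight bids by predicted click-through rate, which is omitted here):

```python
def gsp_allocate(bids, num_slots):
    """Generalized second-price auction: rank advertisers by bid and
    charge each winner the next-highest bid (0 if there is none).
    bids: dict of advertiser -> bid. Returns a list of (advertiser, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for i in range(min(num_slots, len(ranked))):
        advertiser = ranked[i][0]
        # price paid = bid of the advertiser ranked just below this slot
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        outcome.append((advertiser, price))
    return outcome

# Two ad slots, three bidders: 'b' wins the top slot and pays a's bid
print(gsp_allocate({"a": 3.0, "b": 5.0, "c": 1.0}, num_slots=2))
# [('b', 3.0), ('a', 1.0)]
```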

  5. [Sponsoring of physicians in private practice].

    Science.gov (United States)

    Rieger, Hans-Jürgen

    2005-04-01

    The financing of advanced medical training for physicians by the pharmaceutical industry has been the subject of legal discussions for more than two decades. Recent legal changes have renewed the importance of industry sponsoring. At the 106th national convention of the German physicians, the model ordinance for the German medical profession ("Musterberufsordnung für die deutschen Ärztinnen und Ärzte", MBO-Ä) was reformed, and for the first time individual physicians are now permitted, under certain circumstances, to receive financial support from sponsors to participate in medical training events. A recent legal reform to modernize the healthcare system ("GKV-Modernisierungsgesetz", GMG) obliges physicians to observe the law that regulates advertising of medicinal products ("Heilmittelwerbegesetz", HWG); consequently, physicians can commit a misdemeanor when accepting prohibited financial support. This essay discusses the implications of this legal reform for the most important types of commercially sponsored medical training. The GMG reform has introduced an obligation for physicians to complete continuing medical education; however, the resulting legal situation has not changed the requirement that this training remain free of commercial interests.

  6. The Complex Dynamics of Sponsored Search Markets

    Science.gov (United States)

    Robu, Valentin; La Poutré, Han; Bohte, Sander

    This paper provides a comprehensive study of the structure and dynamics of online advertising markets, mostly based on techniques from the emergent discipline of complex systems analysis. First, we look at how the display rank of a URL link influences its click frequency, for both sponsored search and organic search. Second, we study the market structure that emerges from these queries, especially the market share distribution of different advertisers. We show that the sponsored search market is highly concentrated, with less than 5% of all advertisers receiving over 2/3 of the clicks in the market. Furthermore, we show that both the number of ad impressions and the number of clicks follow power law distributions of approximately the same coefficient. However, we find this result does not hold when studying the same distribution of clicks per rank position, which shows considerable variance, most likely due to the way advertisers divide their budget on different keywords. Finally, we turn our attention to how such sponsored search data could be used to provide decision support tools for bidding for combinations of keywords. We provide a method to visualize keywords of interest in graphical form, as well as a method to partition these graphs to obtain desirable subsets of search terms.
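Two of the quantities the study reports, market concentration and power-law click distributions, are straightforward to compute from raw click counts. A hedged sketch (the paper's exact estimators are not given in the abstract; the Hill/maximum-likelihood estimator below is one standard choice for fitting a power-law exponent):

```python
import math

def top_share(clicks, top_fraction=0.05):
    """Fraction of all clicks captured by the top `top_fraction` of advertisers."""
    ordered = sorted(clicks, reverse=True)
    k = max(1, int(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

def hill_exponent(values, x_min):
    """Maximum-likelihood (Hill) estimate of alpha for P(x) ~ x**(-alpha),
    fitted to the tail of the samples with x >= x_min."""
    tail = [v for v in values if v >= x_min]
    return 1.0 + len(tail) / sum(math.log(v / x_min) for v in tail)

# One dominant advertiser among twenty captures ~83% of all clicks
clicks = [90] + [1] * 19
print(round(top_share(clicks), 3))  # 0.826
```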

  7. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-03-10

    This work presents a detailed implementation of a double-precision, non-preconditioned conjugate gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common conjugate gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations, wall-clock time is measured, including all transfer overhead and compute timings.
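The algorithm being ported is the textbook method; as a reference point (a plain NumPy sketch, not the authors' Cell or FPGA implementation), non-preconditioned conjugate gradient for a symmetric positive-definite system looks like:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Non-preconditioned conjugate gradient for a symmetric
    positive-definite matrix A, solving A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # residual small enough: converged
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD example; CG converges in at most 2 iterations for a 2x2 system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```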

  8. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-01-01

    This work presents a detailed implementation of a double-precision, non-preconditioned conjugate gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common conjugate gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations, wall-clock time is measured, including all transfer overhead and compute timings.

  9. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    Directory of Open Access Journals (Sweden)

    A. Gunzinger

    1996-01-01

    Full Text Available At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is on par with conventional supercomputers, while electric power requirements are reduced by a factor of 1,000, weight by a factor of 400, and price by a factor of 100. Software development is a key issue of such parallel systems. This article focuses on the programming environment of the MUSIC system and on its applications.

  10. Chapman Conference on Rainfall Fields

    Science.gov (United States)

    Gupta, V. K.

    The Chapman Conference on Rainfall Fields, sponsored by AGU, was the first of its kind; it was devoted to strengthening scientific interaction between the North American and Latin American geophysics communities. It was hosted by Universidad Simon Bolivar and Instituto Internacional de Estudios Avanzados, in Caracas, Venezuela, during March 24-27, 1986. A total of 36 scientists from Latin America, the United States, Canada, and Europe participated. The conference, which was convened by I. Rodriguez-Iturbe (Universidad Simon Bolivar) and V. K. Gupta (University of Mississippi, University), brought together hydrologists, meteorologists, and mathematicians/statisticians in the name of enhancing an interdisciplinary focus on rainfall research.

  11. First Stars III Conference Summary

    CERN Document Server

    O'Shea, Brian W; Heger, Alexander; Abel, Tom

    2008-01-01

    The understanding of the formation, life, and death of Population III stars, as well as the impact that these objects had on later generations of structure formation, is one of the foremost issues in modern cosmological research and has been an active area of research during the past several years. We summarize the results presented at "First Stars III," a conference sponsored by Los Alamos National Laboratory, the Kavli Institute for Particle Astrophysics and Cosmology, and the Joint Institute for Nuclear Astrophysics. This conference, the third in a series, took place in July 2007 at the La Fonda Hotel in Santa Fe, New Mexico, U.S.A.

  12. 48 CFR 35.017-1 - Sponsoring agreements.

    Science.gov (United States)

    2010-10-01

    ... of the FFRDC's relationship with its sponsor(s). (3) A provision for the identification of retained earnings (reserves) and the development of a plan for their use and disposition. (4) A prohibition against...

  13. International Nuclear Physics Conference

    CERN Document Server

    2016-01-01

    We are pleased to announce that the 26th International Nuclear Physics Conference (INPC2016) will take place in Adelaide (Australia) from September 11-16, 2016. The 25th INPC was held in Firenze in 2013 and the 24th INPC in Vancouver, Canada, in 2010. The Conference is organized by the Centre for the Subatomic Structure of Matter at the University of Adelaide, together with the Australian National University and ANSTO. It is also sponsored by the International Union of Pure and Applied Physics (IUPAP) and by a number of organisations, including AUSHEP, BNL, CoEPP, GSI and JLab. INPC 2016 will be held in the heart of Adelaide at the Convention Centre on the banks of the River Torrens. It will consist of 5 days of conference presentations, with plenary sessions in the mornings, up to ten parallel sessions in the afternoons, poster sessions and a public lecture. The Conference will officially start in the evening of Sunday 11th September with Registration and a Reception and will end late on the afternoon of ...

  14. Successful the 4th Global Foundry Sourcing Conference 2009

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The 4th Global Foundry Sourcing Conference (FSC) 2009 was held at the Rainbow Hotel Shanghai from April 16 to 17, 2009. The FSC Conference was organized by the China Foundry Suppliers Union and Suppliers China Information Consultation Co. Ltd. (SC), and co-sponsored by National Technical Committee 54 on Foundry of the Standardization Administration of China.

  15. Reflections on Piaget. Proceedings of the Jean Piaget Memorial Conference.

    Science.gov (United States)

    Broughton, John M., Ed.; And Others

    1981-01-01

    Sessions of and presentations given at a memorial conference, held in honor of Jean Piaget, are reported. The conference was sponsored by the Developmental Psychology Program at Teachers College, Columbia University, on November 14, 1980. Sixteen scholars from the fields of psychology, philosophy, and education participated. (CJ)

  16. Minimum Competency Testing: A Report of Four Regional Conferences.

    Science.gov (United States)

    Miller, Barbara Soloth, Ed.

    Summaries are presented of the problems and issues of minimum competency testing which were discussed at four regional conferences for political and educational practitioners. The Education Commission of the States sponsored the conferences, but took no official position of support or opposition. Section 1 compares the legislative, practitioner,…

  18. Proceedings of the 23rd Southern Forest Tree Improvement Conference

    Science.gov (United States)

    Robert J. Weir; Alice V. Hatcher [Compilers]

    1995-01-01

    The 23rd Southern Forest Tree Improvement Conference was held at the Holiday Inn SunSpree Resort in Asheville, North Carolina. The Conference was sponsored by the Southern Forest Tree Improvement Committee and hosted by the N. C. State University-Industry Cooperative Tree Improvement Program. A total of 37 presentations, three invited and 34 voluntary, were given....

  19. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  20. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  1. DOE-EERC jointly sponsored research program

    Energy Technology Data Exchange (ETDEWEB)

    Hendrikson, J.G.; Sondreal, E.A.

    1999-09-01

    U.S. Department of Energy (DOE) Cooperative Agreement DE-FC21-93MC30098 funded through the Office of Fossil Energy and administered at the Federal Energy Technology Center (FETC) supported the performance of a Jointly Sponsored Research Program (JSRP) at the Energy and Environmental Research Center (EERC) with a minimum 50% nonfederal cost share to assist industry in commercializing and effectively applying efficient, nonpolluting energy technologies that can compete effectively in meeting market demands for clean fuels, chemical feedstocks, and electricity in the 21st century. The objective of the JSRP was to advance the deployment of advanced technologies for improving energy efficiency and environmental performance through jointly sponsored research on topics that would not be adequately addressed by the private sector alone. Examples of such topics include the barriers to hot-gas cleaning impeding the deployment of high-efficiency power systems and the search for practical means for sequestering CO₂ generated by fossil fuel combustion. The selection of particular research projects was guided by a combination of DOE priorities and market needs, as provided by the requirement for joint venture funding approved both by DOE and the private sector sponsor. The research addressed many different energy resource and related environmental problems, with emphasis directed toward the EERC's historic lead mission in low-rank coals (LRCs), which represent approximately half of the U.S. coal resources in the conterminous states, much larger potential resources in Alaska, and a major part of the energy base in the former U.S.S.R., East Central Europe, and the Pacific Rim. The Base and JSRP agreements were tailored to the growing awareness of critical environmental issues, including water supply and quality, air toxics (e.g., mercury), fine respirable particulate matter (PM₂.₅), and the goal of zero net CO₂ emissions.

  2. 48 CFR 235.017-1 - Sponsoring agreements.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Sponsoring agreements. 235.017-1 Section 235.017-1 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Sponsoring agreements. (c)(4) DoD-sponsoring FFRDCs that function primarily as research laboratories (C3I...

  3. 45 CFR 233.51 - Eligibility of sponsored aliens.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false Eligibility of sponsored aliens. 233.51 Section... CONDITIONS OF ELIGIBILITY IN FINANCIAL ASSISTANCE PROGRAMS § 233.51 Eligibility of sponsored aliens... affidavit(s) of support or similar agreement on behalf of an alien (who is not the child of the sponsor or...

  4. Ethos and Vision Realization in Sponsored Academy Schools

    Science.gov (United States)

    Gibson, Mark T.

    2015-01-01

    This article investigates the realization of ethos and vision in the early stages of sponsored academy schools in England. It is a qualitative nested case study of ten academies. Nineteen key actors were interviewed, including principals and sponsor representatives. The nests were organized by sponsor type. Key themes are discussed within the…

  6. Audi cars Sponsored to Summer Davos

    Institute of Scientific and Technical Information of China (English)

    You Wanlong

    2009-01-01

    During the upcoming Summer Davos in Asia this year, it is a grand affair not only for the city of Dalian but also for Audi, the exclusive vehicle supplier for the event. The Vehicle Handover Ceremony of the FAW-Volkswagen Audi was held at Xinghai Square, Dalian, China, on September 3, 2009. FAW-Volkswagen, vehicle sponsor of the Summer Davos 2009, provided a total of 85 new Audi cars for the distinguished guests during the meeting.

  7. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  8. A novel VLSI processor architecture for supercomputing arrays

    Science.gov (United States)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general-purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should suit the diverse computational structures and simplify the mapping of the complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced, as it is designed on high-performance mask-programmable PAcube arrays.

  9. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers, including OLCF's Titan and NRC-KI's HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  10. Developing and Deploying Advanced Algorithms to Novel Supercomputing Hardware

    CERN Document Server

    Brunner, Robert J; Myers, Adam D

    2007-01-01

    The objective of our research is to demonstrate the practical usage and orders-of-magnitude speedup of real-world applications by using alternative technologies to support high performance computing. Currently, the main barrier to the widespread adoption of this technology is the lack of development tools and case studies; this typically impedes non-specialists who might otherwise develop applications that could leverage these technologies. By partnering with the Innovative Systems Laboratory at the National Center for Supercomputing Applications, we have obtained access to several novel technologies, including several Field-Programmable Gate Array (FPGA) systems, NVidia Graphics Processing Units (GPUs), and the STI Cell BE platform. Our goal is not only to demonstrate the capabilities of these systems, but also to serve as guides for others to follow in our path. To date, we have explored the efficacy of the SRC-6 MAP-C and MAP-E and SGI RASC Athena and RC100 reconfigurable computing platforms in supporting a two-point co...

  11. Numerical infinities and infinitesimals in a new supercomputing framework

    Science.gov (United States)

    Sergeyev, Yaroslav D.

    2016-06-01

    Traditional computers are able to work numerically with finite numbers only. The Infinity Computer patented recently in USA and EU gets over this limitation. In fact, it is a computational device of a new kind able to work numerically not only with finite quantities but with infinities and infinitesimals, as well. The new supercomputing methodology is not related to non-standard analysis and does not use either Cantor's infinite cardinals or ordinals. It is founded on Euclid's Common Notion 5 saying `The whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as numerals belonging to a positional numeral system with an infinite radix described by a specific ad hoc introduced axiom. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
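The positional numeral system the lecture describes can be mimicked in a toy form: represent a numeral as a finite sum of terms c·①^k, where ① (the "grossone" radix) carries an integer exponent, and operate on exponent/coefficient pairs. This is only an illustrative sketch of the arithmetic under simplifying assumptions (integer grosspowers, float coefficients), not the patented Infinity Computer:

```python
from collections import defaultdict

class GrossNumber:
    """Toy numeral: a finite sum of c * G**k terms, where G stands for
    grossone and k is an integer grosspower. Illustrative only."""
    def __init__(self, terms):
        # drop zero coefficients so equal numerals compare equal
        self.terms = {k: c for k, c in terms.items() if c != 0}

    def __add__(self, other):
        out = defaultdict(float)
        for k, c in self.terms.items():
            out[k] += c
        for k, c in other.terms.items():
            out[k] += c
        return GrossNumber(out)

    def __mul__(self, other):
        # exponents add, coefficients multiply, like polynomial product
        out = defaultdict(float)
        for k1, c1 in self.terms.items():
            for k2, c2 in other.terms.items():
                out[k1 + k2] += c1 * c2
        return GrossNumber(out)

    def __repr__(self):
        parts = [f"{c}*G^{k}" for k, c in sorted(self.terms.items(), reverse=True)]
        return " + ".join(parts) or "0"

# (G + 2) * (G - 2) = G^2 - 4: an infinite and a finite part in one numeral
a = GrossNumber({1: 1.0, 0: 2.0})
b = GrossNumber({1: 1.0, 0: -2.0})
print(a * b)  # 1.0*G^2 + -4.0*G^0
```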

  12. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research, a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts, and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the Trubal program was successfully converted to run on CM-5 machines. The converted program was called "Trubal for Parallel Machines" (TPM). Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  13. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    Science.gov (United States)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  14. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    CERN Document Server

    Fluke, Christopher J; Barsdell, Benjamin R; Hassan, Amr H

    2010-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks, and make the investment of time and effort to become early adopters of GPGPU in astronomy, s...

  15. Developing Fortran Code for Kriging on the Stampede Supercomputer

    Science.gov (United States)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open-source statistical language R (R Core Team, 2015) in the gstat package (Pebesma, 2004). It works very well but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and Message Passing Interface (MPI) bindings to Fortran. We have a function similar to autofitVariogram found in the automap package (Hiemstra et al., 2008), and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
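The record's code is in R and Fortran; as a language-neutral reference for what the kriging kernel computes, here is a minimal ordinary-kriging sketch in Python/NumPy (the exponential variogram model and its parameters are illustrative assumptions, not the authors' fitted model):

```python
import numpy as np

def exponential_variogram(h, nugget=0.0, sill=1.0, rng=1.0):
    """Exponential variogram model gamma(h); works elementwise on arrays."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

def ordinary_krige(coords, values, target, variogram):
    """Ordinary kriging prediction at a single target point.
    Solves the kriging system built from the variogram, with a Lagrange
    multiplier row enforcing unbiasedness (weights sum to 1)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)          # variograms between data points
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ values             # weighted prediction

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([1.0, 2.0, 3.0])
pred = ordinary_krige(coords, values, np.array([0.2, 0.2]), exponential_variogram)
```

With a zero nugget, kriging is an exact interpolator: predicting at a data location returns that location's value.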

  16. Using the multistage cube network topology in parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, H.J.; Nation, W.G. (Purdue Univ., Lafayette, IN (USA). School of Electrical Engineering); Kruskal, C.P. (Maryland Univ., College Park, MD (USA). Dept. of Computer Science); Napolitano, L.M. Jr. (Sandia National Labs., Livermore, CA (USA))

    1989-12-01

    A variety of approaches to designing the interconnection network to support communications among the processors and memories of supercomputers employing large-scale parallel processing have been proposed and/or implemented. These approaches are often based on the multistage cube topology. This topology is the subject of much ongoing research and study because of the ways in which the multistage cube can be used. The attributes of the topology that make it useful are described. These include O(N log{sub 2} N) cost for an N input/output network, decentralized control, a variety of implementation options, good data permuting capability to support single instruction stream/multiple data stream (SIMD) parallelism, good throughput to support multiple instruction stream/multiple data stream (MIMD) parallelism, and ability to be partitioned into independent subnetworks to support reconfigurable systems. Examples of existing systems that use multistage cube networks are overviewed. The multistage cube topology can be converted into a single-stage network by associating with each switch in the network a processor (and a memory). Properties of systems that use the multistage cube network in this way are also examined.
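One attribute behind the decentralized control the abstract mentions is self-routing: each 2x2 switch sets itself from a single bit of the destination address, independent of the source. A hedged sketch of this destination-tag rule (the "upper"/"lower" port naming is an assumption for illustration):

```python
def destination_tag_route(dst, n_stages):
    """Destination-tag (self-routing) switch settings through a multistage
    cube network with 2**n_stages inputs: the switch at each stage forwards
    to its upper or lower output according to one bit of the destination
    address. Routing is independent of the source, a property of the topology."""
    settings = []
    for k in range(n_stages - 1, -1, -1):   # stages consume bits MSB -> LSB
        bit = (dst >> k) & 1
        settings.append("lower" if bit else "upper")
    return settings

# Destination 5 = binary 101 in an 8-input (3-stage) network
print(destination_tag_route(5, n_stages=3))  # ['lower', 'upper', 'lower']
```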

  17. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers, including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  18. Supercomputers ready for use as discovery machines for neuroscience

    Directory of Open Access Journals (Sweden)

    Moritz eHelias

    2012-11-01

    Full Text Available NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum-filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  19. Supercomputers ready for use as discovery machines for neuroscience.

    Science.gov (United States)

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  20. International Conference on Intelligence and Learning

    CERN Document Server

    Das, J; O’Connor, Neil

    1981-01-01

    This volume contains the Proceedings of an International Conference on Intelligence and Learning held at York University, England, on July 16-20, 1979. The conference was made possible with the support and assistance of the following agencies: the NATO Scientific Division, specifically the Human Factors panel, was the major sponsor of the conference. Special thanks are due to Dr. B. A. Bayraktar, who helped organize the conference. Special appreciation is also expressed for the support of the University of York where the conference was held, the University of Alberta, the University of California, Los Angeles, the Medical Research Council, especially its Developmental Psychology Research Unit in London, and the British Council. The conference was jointly directed by J. P. Das and N. O'Connor. The directors appreciate the assistance in administrative matters of Patricia Chobater and Emma Collins of the University of Alberta. The Editors of the Proceedings acknowledge and appreciate the following individuals who...

  1. Conference Discusses China’s Changing Families

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    MARCH 26, 1994, a seminar titled "China’s Traditional Culture and Changes in the Family" was convened at the Hong Kong International Conference Center. The seminar was co-sponsored by the Women’s Institute of the All-China Women’s Federation and the Hong Kong Women’s Foundation.

  2. 1987 Oak Ridge model conference: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1987-10-01

    A conference sponsored by the United States Department of Energy (DOE) was held on waste management. Topics included waste management, site remediation, waste minimization, economic and social aspects of waste management, and waste management training. Several case studies of US DOE facilities are included. Individual projects are processed separately for the data bases. (CBS)

  3. [Third-party funding and industrial sponsoring].

    Science.gov (United States)

    Bock, R-W

    2003-11-01

    Due to sensational publications, media-effective legal proceedings and, not least, personal involvement in many cases, the medical profession became aware of the potential criminal significance of third-party funding and industrial sponsoring in hospitals. It started with the so-called Heart-Valve-Affair ("Herzklappenskandal") in 1994. Since then, insecurity has prevailed, further nourished by a new series of legal proceedings in spring 2002. Industrial sponsoring has been practised in hospitals for a long time. Research work at universities and colleges would hardly be possible without third-party funding, which is accepted de lege lata according to § 25 of the German Hochschulrahmengesetz (framework law on universities and colleges). In view of the course of action of the criminal prosecution authorities, the question arose which precautions must be taken with regard to grants from the medical-pharmaceutical industry to avoid consequences under criminal law. There are various facts and circumstances in relation to these benefits, such as the financing of educational training, payments related to the participation in congresses, speaker's fees, consultancy agreements, clinical tests and application evaluation, provision of equipment and staff as well as the financing thereof, donations to medical facilities and so-called Fördervereine (supporting associations), benefits for research projects, etc.

  4. What can Bilfinger teach Olympic sponsors?

    Directory of Open Access Journals (Sweden)

    Mark Dodds

    2016-10-01

    Full Text Available Bilfinger SE (Bilfinger) is a leading international engineering and services group (Bilfinger.com, 2015) and was a local sponsor of the 2014 FIFA World Cup. The company is accused of paying bribes through its subsidiary company, Mauell (dw.com, 2015), to public officials in Brazil for contracts related to the 2014 World Cup (Cassin, 2015). The corruption allegations relate to orders to equip security command centers in twelve host cities during the 2014 World Cup in Brazil (dw.com, 2015). Because Brazil hosted the 2014 FIFA World Cup and will host the 2016 Summer Olympic Games, companies need to consider the risks of many international anti-corruption laws, such as Brazil's anti-corruption law, commonly referred to as the Clean Companies Act, and other applicable anti-corruption laws like the United States' Foreign Corrupt Practices Act (Rogers et al., 2014). This paper will analyze the Bilfinger case involving corruption activity at the 2014 FIFA World Cup and offer insights for sponsors of the 2016 Summer Olympic Games.

  5. Sponsored Search Auctions with Markovian Users

    CERN Document Server

    Aggarwal, Gagan; Muthukrishnan, S; Pal, Martin

    2008-01-01

    Sponsored search involves running an auction among advertisers who bid in order to have their ad shown next to search results for specific keywords. Currently, the most popular auction for sponsored search is the "Generalized Second Price" (GSP) auction in which advertisers are assigned to slots in the decreasing order of their "score," which is defined as the product of their bid and click-through rate. In the past few years, there has been significant research on the game-theoretic issues that arise in an advertiser's interaction with the mechanism as well as possible redesigns of the mechanism, but this ranking order has remained standard. From a search engine's perspective, the fundamental question is: what is the best assignment of advertisers to slots? Here "best" could mean "maximizing user satisfaction," "most efficient," "revenue-maximizing," "simplest to interact with," or a combination of these. To answer this question we need to understand the behavior of a search engine user when she sees the dis...
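
The GSP rule described here (rank by bid times click-through rate, charge the minimum bid that preserves the slot) can be sketched as follows. The function is a hypothetical illustration under simplifying assumptions (position-independent CTRs, a zero reserve price), not the mechanism of any particular search engine:

```python
def gsp_auction(bids, ctrs, num_slots):
    """Generalized Second Price: order advertisers by score = bid * CTR.
    The advertiser in slot i pays (next-highest score) / (own CTR) per click,
    i.e. the smallest bid that keeps it ahead of the advertiser below."""
    order = sorted(range(len(bids)), key=lambda i: bids[i] * ctrs[i], reverse=True)
    outcome = []
    for pos, i in enumerate(order[:num_slots]):
        if pos + 1 < len(order):
            j = order[pos + 1]
            price = bids[j] * ctrs[j] / ctrs[i]
        else:
            price = 0.0  # no advertiser below: zero reserve price assumed
        outcome.append((i, price))
    return outcome  # list of (advertiser index, price per click)
```

Note that a low-bidding advertiser with a high CTR can outrank a higher bidder, which is exactly the "score" ordering the abstract takes as its starting point.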

  6. Fire and the related effects of nuclear explosions. 1982 Asilomar Conference

    Energy Technology Data Exchange (ETDEWEB)

    Martin, S.B.; Alger, R.S. (eds.)

    1982-11-01

    This report summarizes the proceedings of a Federal Emergency Management Agency-sponsored Conference on fire and the related effects of nuclear explosions (with passing attention to earthquakes and other nonnuclear mishaps). This conference, the fifth of an annual series (formally called Blast/Fire Interaction Conferences), was held during the week of April 25, 1982, again at Asilomar, California.

  7. Proceedings of the 28th Annual Farmingdale State College Teaching of Psychology: Ideas and Innovations Conference

    Science.gov (United States)

    Howell-Carter, Marya, Ed.; Gonder, Jennifer, Ed.

    2014-01-01

    Proceedings of the 28th Annual Conference on the Teaching of Psychology: Ideas and Innovations, sponsored by the Psychology Department of Farmingdale State College. The conference theme for 2014 was:" Infusing Issues of Racial, Religious, and Sexuality Diversity Across the Undergraduate Curriculum." The Conference featured a keynote…

  8. Beware: this is sponsored! How disclosures of sponsored content affect persuasion knowledge and brand responses

    NARCIS (Netherlands)

    Boerman, S.; van Reijmersdal, E.; Neijens, P.

    2012-01-01

    This study examined how disclosure of sponsored content influences persuasion knowledge and brand responses (i.e., brand memory and brand attitude). Moreover, we tested whether extending disclosure duration increases its effect. We conducted an experiment (N = 116) in which we compared the effects o

  9. SIAM conference on applications of dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    1992-01-01

    A conference (Oct.15--19, 1992, Snowbird, Utah; sponsored by SIAM (Society for Industrial and Applied Mathematics) Activity Group on Dynamical Systems) was held that highlighted recent developments in applied dynamical systems. The main lectures and minisymposia covered theory about chaotic motion, applications in high energy physics and heart fibrillations, turbulent motion, Henon map and attractor, integrable problems in classical physics, pattern formation in chemical reactions, etc. The conference fostered an exchange between mathematicians working on theoretical issues of modern dynamical systems and applied scientists. This two-part document contains abstracts, conference program, and an author index.

  10. Conference Committees: Conference Committees

    Science.gov (United States)

    2009-09-01

    International Programm Committee (IPC) Harald Ade NCSU Sadao Aoki University Tsukuba David Attwood Lawrence Berkeley National Laboratory/CXRO Christian David Paul Scherrer Institut Peter Fischer Lawrence Berkeley National Laboratory Adam Hitchcock McMaster University Chris Jacobsen SUNY, Stony Brook Denis Joyeux Lab Charles Fabry de l'Institut d'Optique Yasushi Kagoshima University of Hyogo Hiroshi Kihara Kansai Medical University Janos Kirz SUNY Stony Brook Maya Kiskinova ELETTRA Ian McNulty Argonne National Lab/APS Alan Michette Kings College London Graeme Morrison Kings College London Keith Nugent University of Melbourne Zhu Peiping BSRF Institute of High Energy Physics Francois Polack Soleil Christoph Quitmann Paul Scherrer Institut Günther Schmahl University Göttingen Gerd Schneider Bessy Hyun-Joon Shin Pohang Accelerator Lab Jean Susini ESRF Mau-Tsu Tang NSRRC Tony Warwick Lawrence Berkeley Lab/ALS Local Organizing Committee Christoph Quitmann Chair, Scientific Program Charlotte Heer Secretary Christian David Scientific Program Frithjof Nolting Scientific Program Franz Pfeiffer Scientific Program Marco Stampanoni Scientific Program Robert Rudolph Sponsoring, Financials Alfred Waser Industry Exhibition Robert Keller Public Relation Markus Knecht Computing and WWW Annick Cavedon Proceedings and Excursions and Accompanying Persons Program Margrit Eichler Excursions and Accompanying Persons Program Kathy Eikenberry Excursions and Accompanying Persons Program Marlies Locher Excursions and Accompanying Persons Program

  11. Jointly Sponsored Research Program Energy Related Research

    Energy Technology Data Exchange (ETDEWEB)

    Western Research Institute

    2009-03-31

    The Cooperative Agreement DE-FC26-98FT40323, the Jointly Sponsored Research (JSR) Program at Western Research Institute (WRI), began in 1998. Over the course of the Program, seventy-seven tasks were proposed, utilizing a total of $23,202,579 in USDOE funds. Against this funding, cosponsors committed $26,557,649 in private funds to produce a program valued at $49,760,228. The goal of the Jointly Sponsored Research Program was to develop or assist in the development of innovative technology solutions that will: (1) Increase the production of United States energy resources - coal, natural gas, oil, and renewable energy resources; (2) Enhance the competitiveness of United States energy technologies in international markets and assist in technology transfer; (3) Reduce the nation's dependence on foreign energy supplies and strengthen both the United States and regional economies; and (4) Minimize environmental impacts of energy production and utilization. Under the JSR Program, energy-related tasks emphasized enhanced oil recovery, heavy oil upgrading and characterization, coal beneficiation and upgrading, coal combustion systems development including oxy-combustion, emissions monitoring and abatement, coal gasification technologies including gas clean-up and conditioning, hydrogen and liquid fuels production, coal-bed methane recovery, and the development of technologies for the utilization of renewable energy resources. Environmental-related activities emphasized cleaning contaminated soils and waters, processing of oily wastes, mitigating acid mine drainage, and demonstrating uses for solid waste from clean coal technologies and other advanced coal-based systems. Technology enhancement activities included resource characterization studies and development of improved methods, monitors and sensors. In general the goals of the tasks proposed were to enhance competitiveness of U.S. technology, increase production of domestic resources, and reduce environmental

  12. Topics in Number Theory Conference

    CERN Document Server

    Andrews, George; Ono, Ken

    1999-01-01

    From July 31 through August 3, 1997, the Pennsylvania State University hosted the Topics in Number Theory Conference. The conference was organized by Ken Ono and myself. By writing the preface, I am afforded the opportunity to express my gratitude to Ken for being the inspiring and driving force behind the whole conference. Without his energy, enthusiasm and skill the entire event would never have occurred. We are extremely grateful to the sponsors of the conference: The National Science Foundation, The Penn State Conference Center and the Penn State Department of Mathematics. The object of this conference was to provide a variety of presentations giving a current picture of recent, significant work in number theory. There were eight plenary lectures: H. Darmon (McGill University), "Non-vanishing of L-functions and their derivatives modulo p." A. Granville (University of Georgia), "Mean values of multiplicative functions." C. Pomerance (University of Georgia), "Recent results in primality testing." C. ...

  13. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  14. Modelling Emotional and Attitudinal Evaluations of Major Sponsors

    DEFF Research Database (Denmark)

    Martensen, Anne; Hansen, Flemming

    2004-01-01

    The paper reports findings from a larger study of sponsors and their relationship to sponsored parties. In the present reporting, the focus is on sponsors. Rather than evaluating such sponsorships in traditional effect hierarchical terms, a conceptual Sponsor Value Model is specified as a structural equation model where the drivers are attitudes towards the sponsorship and emotions towards the sponsorship. It is found that the two classes of variables describe different aspects of the perception of sponsorships, and that they both contribute significantly to the overall value of sponsoring for a particular company. In the present paper, two cases are shown for two major sponsors. The specified Sponsor Value Model is estimated by a partial least squares (PLS) method. It is found that the two sponsors are perceived differently, both in terms of emotional and attitudinal responses. It is also found that...

  15. Cyberdyn supercomputer - a tool for imaging geodinamic processes

    Science.gov (United States)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes developing within the deep interior of our planet, with significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide make use of such powerful and fast computers to simulate complex phenomena involving fluid dynamics and gain deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible was jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction of the computing power (20%). After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing

  16. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  17. Jointly Sponsored Research Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-07-01

    The Jointly Sponsored Research Program (JSRP) is a US Department of Energy (DOE) program funded through the Office of Fossil Energy and administered at the Morgantown Energy Technology Center. Under this program, which has been in place since Fiscal Year 1990, DOE makes approximately $2.5 million available each year to the Energy and Environmental Research Center (EERC) to fund projects that are of current interest to industry but which still involve significant risk, thus requiring some government contribution to offset the risk if the research is to move forward. The program guidelines require that at least 50% of the project funds originate from nonfederal sources. Projects funded under the JSRP often originate under a complementary base program, which funds higher-risk projects. The projects funded in Fiscal Year 1996 addressed a wide range of Fossil Energy interests, including hot-gas filters for advanced power systems; development of cleaner, more efficient processing technologies; development of environmental control technologies; development of environmental remediation and reuse technologies; development of improved analytical techniques; and development of a beneficiation technique to broaden the use of high-sulfur coal. Descriptions and status for each of the projects funded during the past fiscal year are included in Section A of this document, Statement of Technical Progress.

  18. Data mining method for anomaly detection in the supercomputer task flow

    Science.gov (United States)

    Voevodin, Vadim; Voevodin, Vladimir; Shaikhislamov, Denis; Nikitenko, Dmitry

    2016-10-01

    The efficiency of most supercomputer applications is extremely low. At the same time, the user rarely even suspects that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer's task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every job executed, and the data is used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify an application as one of three classes: normal, suspicious and definitely anomalous. The approach has been demonstrated on actual applications running on the "Lomonosov" supercomputer.
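
The pipeline the abstract describes (aggregate monitoring time series into integral per-job characteristics, then classify) can be illustrated with a toy sketch. The threshold rule below is a deliberately trivial stand-in for the authors' trained Random Forest, and all feature names and cutoffs are invented for illustration:

```python
import statistics

def job_features(cpu_load, mem_gb):
    """Collapse per-job monitoring time series into integral characteristics
    (illustrative names, not the paper's actual feature set)."""
    return {
        "cpu_mean": statistics.mean(cpu_load),
        "cpu_min": min(cpu_load),
        "mem_peak": max(mem_gb),
    }

def classify(feat, suspicious_below=0.3, anomalous_below=0.05):
    """Toy threshold stand-in for the Random Forest classifier: map a job's
    features to one of the three classes named in the abstract."""
    if feat["cpu_mean"] < anomalous_below:
        return "definitely anomalous"
    if feat["cpu_mean"] < suspicious_below:
        return "suspicious"
    return "normal"
```

In the real system the hand-tuned thresholds would be replaced by a Random Forest trained on labeled jobs, but the feature-extraction step ahead of it has the same shape.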

  19. Nostradamus conference

    CERN Document Server

    Rössler, Otto; Snášel, Václav; Abraham, Ajith; Corchado, Emilio; Nostradamus: Modern Methods of Prediction, Modeling and Analysis of Nonlinear Systems

    2013-01-01

    This proceedings book of the Nostradamus conference (http://nostradamus-conference.org) contains the accepted papers presented at the event in 2012. The Nostradamus conference was held in Ostrava (http://www.ostrava.cz/en), one of the biggest and most historic cities of the Czech Republic, in September 2012. The conference topics focus on classical as well as modern methods for the prediction of dynamical systems, with applications in science, engineering and economy. Topics include (but are not limited to): prediction by classical and novel methods, predictive control, deterministic chaos and its control, complex systems, modelling and prediction of their dynamics, and much more.

  20. AUDIOLOGY AND EDUCATION OF THE DEAF, A RESEARCH PROJECT AND TRAINING MANUAL SPONSORED BY THE JOINT COMMITTEE ON AUDIOLOGY AND EDUCATION OF THE DEAF.

    Science.gov (United States)

    VENTRY, IRA M.

    TO IMPROVE UNDERSTANDING BETWEEN AUDIOLOGISTS AND EDUCATORS OF THE DEAF, THE AMERICAN SPEECH AND HEARING ASSOCIATION AND THE CONFERENCE OF EXECUTIVES OF AMERICAN SCHOOLS FOR THE DEAF SPONSORED A TWO YEAR PROJECT. FIVE DIFFERENT QUESTIONNAIRES WERE SENT TO SPEECH AND HEARING CENTERS, SCHOOLS FOR THE DEAF, TEACHERS OF THE DEAF, AND AUDIOLOGISTS. THE…

  1. Reflecting on the Postgraduate Experience: Teaching Research Methods and Statistics: Review of the DART-P Sponsored Workshop at PsyPAG 2013

    Science.gov (United States)

    Jackson, Emma J.; Davies, Emma. L.

    2014-01-01

    Following the success of last year's teaching and career development workshop, this year's DART-P sponsored workshop at the Psychology Postgraduate Affairs Group (PsyPAG) Annual Conference, held at Lancaster University, focused on postgraduates' experiences of teaching research methods. This article provides a review of the invited speakers…

  2. Team sponsors in community-based health leadership programs.

    Science.gov (United States)

    Patterson, Tracy Enright; Dinkin, Donna R; Champion, Heather

    2017-05-02

    Purpose: The purpose of this article is to share the lessons learned about the role of team sponsors in action-learning teams as part of community-based health leadership development programs. Design/methodology/approach: This case study uses program survey results from fellow participants, action learning coaches and team sponsors to understand the value of sponsors to the teams, the roles they most often filled and the challenges they faced as team sponsors. Findings: The extent to which the sponsors were perceived as having contributed to the work of the action learning teams varied greatly from team to team. Most sponsors agreed that they were well informed about their role. The roles sponsors most frequently played were to provide the teams with input and support, serve as a liaison to the community and serve as a sounding board, motivator and cheerleader. The most common challenges or barriers team sponsors faced in this role were keeping engaged in the process, adjusting to the role and feeling disconnected from the program. Practical implications: This work provides insights for program developers and community foundations who are interested in building the capacity for health leadership by linking community sponsors with emerging leaders engaged in an action learning experience. Originality/value: This work begins to fill a gap in the literature. The role of team sponsors has been studied for single-organization work teams, but there is a void of understanding about the role of sponsors with multi-organizational teams working to improve health while also learning about leadership.

  3. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), MIRA supercomputer at Argonne Leadership Computing Facilities (ALCF), Supercomputer at the National Research Center Kurchatov Institute , IT4 in Ostrava and others). Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. 
This implementation
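
The submission scheme described in the abstract above (light-weight MPI wrappers running single-threaded payloads in parallel across a leadership-class machine's worker cores) can be illustrated as follows. This is a hedged reconstruction, not the actual PanDA pilot code; the names `payloads_for_rank` and `run_rank` are hypothetical.

```python
# Illustrative sketch of a light-weight MPI wrapper: one MPI job of `size`
# ranks is submitted to the batch queue, and each rank runs a disjoint
# round-robin share of single-threaded payload commands.
import subprocess

def payloads_for_rank(payloads, rank, size):
    """Round-robin assignment of payload commands to one MPI rank."""
    return [cmd for i, cmd in enumerate(payloads) if i % size == rank]

def run_rank(payloads, rank, size):
    """Run this rank's payloads sequentially. In a real wrapper, rank and
    size would come from MPI (e.g. mpi4py's MPI.COMM_WORLD)."""
    for cmd in payloads_for_rank(payloads, rank, size):
        subprocess.run(cmd, shell=True, check=True)
```

With, say, 16 ranks and 160 queued single-threaded simulation jobs, each rank would execute 10 payloads, filling the multi-core worker nodes without requiring the payload itself to be MPI-aware.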

  4. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  5. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    Science.gov (United States)

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  6. Blauwe ogen schieten tekort. Lessen voor sponsoring van landschap

    NARCIS (Netherlands)

    Overbeek, M.M.M.; Graaff, de R.P.M.

    2010-01-01

    Literature research and interviews with experts (including practitioners) and representatives of companies in Amstelland and Het Groene Woud on the process and the conditions under which companies will sponsor landscape. Sponsorship usually takes place in the context of corporate social responsibility (mvo), whereby companies

  7. 7 CFR 225.9 - Program assistance to sponsors.

    Science.gov (United States)

    2010-01-01

    ... changes in the series for food away from home of the Consumer Price Index(CPI) for all urban consumers... sponsors information on available commodities. Sponsors shall use in the Program food donated by the... Program, and operate more than one child nutrition program under a single State agency, must use a...

  8. 75 FR 68972 - New Animal Drugs; Change of Sponsor's Name

    Science.gov (United States)

    2010-11-10

    ... HUMAN SERVICES Food and Drug Administration 21 CFR Part 510 New Animal Drugs; Change of Sponsor's Name... (FDA) is amending the animal drug regulations to reflect a change of sponsor's name from North American Nutrition Companies, Inc., to Provimi North America, Inc. DATES: This rule is effective November 10,...

  9. Conference Proceedings, the Education of Hispanics: "Issues for the 80's" (San Francisco, CA, January 15-18, 1980).

    Science.gov (United States)

    Martinez, Jesus "Metro", Ed.; Payan, Rose Marie, Ed.

    The conference on the education of Hispanics was one of five regional conferences sponsored by the U.S. Office of Education in conjunction with regional offices of education. Conference participants attempted to analyze the federal government's commitment to establishing and implementing equal educational opportunities for Hispanic students, and to…

  10. Documentation and Evaluation Study of the Texas Teacher Corps Network Program '78 Community Council Developmental Training Conference.

    Science.gov (United States)

    Leos, Robert; Olivarez, Ruben Dario

    This document describes a training conference sponsored by the Teacher Corps Network. Informational and skill development sessions for Community Council members from the Teacher Corps projects are included. A special planning session, held prior to the conference is described. A description is given of the conference planning and events. The…

  11. Consensus conferences

    DEFF Research Database (Denmark)

    Nielsen, Annika Porsborg; Lassen, Jesper

    Our results point to significant national variation both in terms of the perceived aim of consensus conferences, expectations to conference outcomes, conceptions of the roles of lay people and experts, and in terms of the way in which the role of public deliberation is interpreted. Interestingly...

  12. Evaluating alcoholics anonymous sponsor attributes using conjoint analysis.

    Science.gov (United States)

    Stevens, Edward B; Jason, Leonard A

    2015-12-01

    Alcoholics Anonymous (AA) considers sponsorship an important element of the AA program, especially in early recovery. A total of 225 adults with experience as a sponsor, a sponsee, or both participated in a hypothetical sponsor ranking exercise in which five attributes were varied across three levels. Conjoint analysis was used to compute the part-worth utility of the attributes and their levels for experience, knowledge, availability, confidentiality, and goal-setting. Differences in utilities by attribute were found: confidentiality had the greatest overall possible impact on utility and sponsor knowledge had the least. These findings suggest qualitative differences in sponsors may impact their effectiveness. Future research on AA should continue to investigate sponsor influence on an individual's overall recovery trajectory. Copyright © 2015 Elsevier Ltd. All rights reserved.
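
The part-worth utilities mentioned in this abstract are typically estimated by regressing preference scores on dummy-coded attribute levels. The sketch below (hypothetical data and function name, not the study's actual coding scheme) shows the idea.

```python
# Hypothetical illustration of conjoint part-worth estimation: regress
# preference ratings on dummy-coded attribute levels via least squares.
import numpy as np

def part_worths(design, ratings):
    """design: profiles x dummy-coded attribute levels; ratings: preference
    scores. Returns [intercept, part-worth utilities...]."""
    X = np.hstack([np.ones((design.shape[0], 1)), design])
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return coef
```

Attributes with the widest spread between their best- and worst-level utilities (confidentiality, in the study's results) have the greatest possible impact on overall preference.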

  13. PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)

    Science.gov (United States)

    Troparevsky, Claudia; Stocks, George Malcolm

    2012-12-01

    Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. 
We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc.

  14. Proceedings of the joint contractors meeting: FE/EE Advanced Turbine Systems conference FE fuel cells and coal-fired heat engines conference

    Energy Technology Data Exchange (ETDEWEB)

    Geiling, D.W. [ed.

    1993-08-01

    The joint contractors meeting: FE/EE Advanced Turbine Systems conference, FE fuel cells and coal-fired heat engines conference, was sponsored by the US Department of Energy Office of Fossil Energy and held at the Morgantown Energy Technology Center, P.O. Box 880, Morgantown, West Virginia 26507-0880, August 3-5, 1993. Individual papers have been entered separately.

  15. The Ramsar Conference, Final Act of the International Conference on the Conservation of Wetlands and Waterfowl (Ramsar, Iran, 30 January to 3 February 1971).

    Science.gov (United States)

    IUCN Bulletin, 1971

    1971-01-01

    The text of the Final Act of the International Conference on the Conservation of Wetlands and Waterfowl is presented in this pamphlet. The conference was convened by the Government of Iran at Ramsar, Iran, January 30 to February 3, 1971, to promote international collaboration in this field. It was sponsored by the International Wildfowl Research…

  16. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community's ...

  17. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
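
The abstract's model combines a memory-bandwidth contention term with a parameterized communication model. A minimal sketch of that structure (illustrative only; the paper's actual formulas, parameters, and validation data differ, and the function name is hypothetical) might look like:

```python
# Hypothetical contention-based performance model in the spirit of the
# abstract: per-process time is compute time, plus memory time inflated by
# cores contending for shared node bandwidth, plus a simple
# latency/bandwidth (Hockney-style) communication term.

def predicted_time(work_flops, flops_per_sec,
                   bytes_moved, node_bandwidth, cores_per_node,
                   msg_bytes, latency, link_bandwidth):
    t_compute = work_flops / flops_per_sec
    # All active cores share node_bandwidth, so effective per-core
    # bandwidth shrinks with the number of cores (contention).
    t_memory = bytes_moved / (node_bandwidth / cores_per_node)
    t_comm = latency + msg_bytes / link_bandwidth
    return t_compute + t_memory + t_comm
```

The key idea is that per-core effective bandwidth shrinks as more cores on a node are active, which is what the STREAM sustained-bandwidth measurements quantify.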

  18. The impact of the U.S. supercomputing initiative will be global

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, Dona [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  19. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  20. [Experience in simulating the structural and dynamic features of small proteins using table supercomputers].

    Science.gov (United States)

    Kondrat'ev, M S; Kabanov, A V; Komarov, V M; Khechinashvili, N N; Samchenko, A A

    2011-01-01

    The results of theoretical studies of the structural and dynamic features of peptides and small proteins are presented, carried out by quantum chemical and molecular dynamics methods on high-performance graphics stations, "table supercomputers", using distributed calculations with CUDA technology.

  1. 4th International Joint Conference on Computational Intelligence

    CERN Document Server

    Correia, António; Rosa, Agostinho; Filipe, Joaquim

    2015-01-01

    The present book includes extended and revised versions of a set of selected papers from the Fourth International Joint Conference on Computational Intelligence (IJCCI 2012), held in Barcelona, Spain, from 5 to 7 October 2012. The conference was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and was organized in cooperation with the Association for the Advancement of Artificial Intelligence (AAAI). The conference brought together researchers, engineers and practitioners in computational technologies, especially those related to the areas of fuzzy computation, evolutionary computation and neural computation. It is composed of three co-located conferences, each one specialized in one of the aforementioned knowledge areas, namely: - International Conference on Evolutionary Computation Theory and Applications (ECTA) - International Conference on Fuzzy Computation Theory and Applications (FCTA) - International Conference on Neural Computation Theory a...

  2. 2010 Winter Conference on Plasma Spectrochemistry

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The 2010 Winter Conference on Plasma Spectrochemistry, sixteenth in a series of biennial meetings sponsored by the ICP Information Newsletter, features developments in plasma spectrochemical analysis by inductively coupled plasma (ICP), dc plasma (DCP), microwave plasma (MIP), glow discharge (GDL, HCL), and laser sources. The meeting will be held Monday, January 4 through Saturday, January 9, 2010, in Fort Myers, Florida (www.fortmyers-sanibel.com) at the Sanibel Harbour Resort and Spa (www.sanibel-resort.com).

  3. 2013 Robotics Science & Systems Conference Travel Support

    Science.gov (United States)

    2015-01-21

    Report documentation (sponsor: Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211) for robotics conference travel support; performing organization: University of Washington, 4333 Brooklyn AVE NE, Box 359472, Seattle, WA 98195-9472; dated 31-May-2014. The award supported travel of invited speakers and students from the U.S.

  4. Sponsorship recall and recognition of official sponsors of the 2010 ...

    African Journals Online (AJOL)

    Sponsorship recall and recognition of official sponsors of the 2010 FIFA World Cup™ ... which was the world's largest football event hosted in South Africa (SA), offered ... use the confusion in consumers to market their products and brands.

  5. Is Mail Service Pharmacy Cost Beneficial to Plan Sponsors?

    OpenAIRE

    Larisa Vulakh, Student Pharmacist; Albert I. Wertheimer, PhD, MBA

    2011-01-01

    The objective of this study was to describe and compare prescription drug costs charged to a plan sponsor for the top 50 maintenance medications provided through retail and mail service procurement channels. Data were obtained for covered beneficiaries of a health plan sponsored by an employer with just over 3,000 covered employees. The analytics team at the PBM administering the plan sponsor’s prescription drug benefit provided de-identified claims information for the top 50 maintenance presc...

  6. Interactive steering of supercomputing simulation for aerodynamic noise radiated from square cylinder; Supercomputer wo mochiita steering system ni yoru kakuchu kara hoshasareru kurikion no suchi kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Yokono, Y. [Toshiba Corp., Tokyo (Japan); Fujita, H. [Tokyo Inst. of Technology, Tokyo (Japan). Precision Engineering Lab.

    1995-03-25

    This paper describes extensive computer simulation for aerodynamic noise radiated from a square cylinder using an interactive steering supercomputing simulation system. The unsteady incompressible three-dimensional Navier-Stokes equations are solved by the finite volume method using a steering system which can visualize the numerical process during calculation and alter the numerical parameters. Using the fluctuating surface pressure of the square cylinder, the farfield sound pressure is calculated based on Lighthill-Curle's equation. The results are compared with those of low-noise wind tunnel experiments, and good agreement is observed for the peak spectrum frequency of the sound pressure level. 14 refs., 10 figs.
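
For context, the compact-body (dipole) form of the Lighthill-Curle relation commonly used to obtain far-field sound from fluctuating surface pressure is, under standard low-Mach-number assumptions (the paper's exact formulation may differ):

```latex
p'(\mathbf{x},t) \;\approx\; \frac{x_i}{4\pi c_0 |\mathbf{x}|^2}\,
\frac{\partial F_i}{\partial t}\!\left(t - \frac{|\mathbf{x}|}{c_0}\right),
\qquad
F_i(t) = \int_S p(\mathbf{y},t)\, n_i \,\mathrm{d}S ,
```

where $c_0$ is the ambient speed of sound and $F_i$ is the unsteady aerodynamic force on the cylinder, computed from the surface pressure $p$ over the body surface $S$.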

  7. Paying for Pollution: Water Quality and Effluent Charges. Proceedings from a Conference (Chicago, Illinois, May 19, 1977).

    Science.gov (United States)

    Conservation Foundation, Washington, DC.

    This publication gives the proceedings from a 1977 conference sponsored by the Conservation Foundation. Participants discuss the appropriate means to control water pollution, emphasizing the use of effluent charges as economic incentive for polluters to clean up their waters. (MA)

  9. Risks and resolutions: the ‘day after’ for financial institutions - a conference summary

    OpenAIRE

    Carl R. Tannenbaum; Steven VanBever

    2009-01-01

    The Chicago Fed’s Supervision and Regulation Department, in conjunction with DePaul University’s Center for Financial Services, sponsored its second annual Financial Institutions Risk Management Conference on April 14–15, 2009. The conference focused on risk management, headline issues, and recent financial innovations.

  10. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  11. Microgravity Materials Science Conference 2000. Volume 3

    Science.gov (United States)

    Ramachandran, Narayanan; Bennett, Nancy; McCauley, Dannah; Murphy, Karen; Poindexter, Samantha

    2001-01-01

    This is Volume 3 of 3 of the 2000 Microgravity Materials Science Conference that was held June 6-8 at the Von Braun Center, Huntsville, Alabama. It was organized by the Microgravity Materials Science Discipline Working Group, sponsored by the Microgravity Research Division (MRD) at NASA Headquarters, and hosted by NASA Marshall Space Flight Center and the Alliance for Microgravity Materials Science and Applications (AMMSA). It was the fourth NASA conference of this type in the microgravity materials science discipline. The microgravity science program sponsored 200 investigators, all of whom made oral or poster presentations at this conference. In addition, posters and exhibits covering NASA microgravity facilities, advanced technology development projects sponsored by the NASA Microgravity Research Division at NASA Headquarters, and commercial interests were exhibited. The purpose of the conference was to inform the materials science community of research opportunities in reduced gravity and to highlight the Spring 2001 release of the NASA Research Announcement (NRA) to solicit proposals for future investigations. It also served to review the current research and activities in materials science, to discuss the envisioned long-term goals, and to highlight new crosscutting research areas of particular interest to MRD. The conference was aimed at materials science researchers from academia, industry, and government. A workshop on in situ resource utilization (ISRU) was held in conjunction with the conference with the goal of evaluating and prioritizing processing issues in Lunar and Martian type environments. The workshop participation included invited speakers and investigators currently funded in the materials science program under the Human Exploration and Development of Space (HEDS) initiative. The conference featured a plenary session every day with an invited speaker that was followed by three parallel breakout sessions in subdisciplines.
Attendance was close

  12. Microgravity Materials Science Conference 2000. Volume 2

    Science.gov (United States)

    Ramachandran, Narayanan (Editor); Bennett, Nancy (Editor); McCauley, Dannah (Editor); Murphy, Karen (Editor); Poindexter, Samantha (Editor)

    2001-01-01

    This is Volume 2 of 3 of the 2000 Microgravity Materials Science Conference that was held June 6-8 at the Von Braun Center, Huntsville, Alabama. It was organized by the Microgravity Materials Science Discipline Working Group, sponsored by the Microgravity Research Division (MRD) at NASA Headquarters, and hosted by NASA Marshall Space Flight Center and the Alliance for Microgravity Materials Science and Applications (AMMSA). It was the fourth NASA conference of this type in the microgravity materials science discipline. The microgravity science program sponsored approx. 200 investigators, all of whom made oral or poster presentations at this conference. In addition, posters and exhibits covering NASA microgravity facilities, advanced technology development projects sponsored by the NASA Microgravity Research Division at NASA Headquarters, and commercial interests were exhibited. The purpose of the conference was to inform the materials science community of research opportunities in reduced gravity and to highlight the Spring 2001 release of the NASA Research Announcement (NRA) to solicit proposals for future investigations. It also served to review the current research and activities in materials science, to discuss the envisioned long-term goals, and to highlight new crosscutting research areas of particular interest to MRD. The conference was aimed at materials science researchers from academia, industry, and government. A workshop on in situ resource utilization (ISRU) was held in conjunction with the conference with the goal of evaluating and prioritizing processing issues in Lunar and Martian type environments. The workshop participation included invited speakers and investigators currently funded in the materials science program under the Human Exploration and Development of Space (HEDS) initiative. The conference featured a plenary session every day with an invited speaker that was followed by three parallel breakout sessions in subdisciplines.
Attendance

  13. Microgravity Materials Science Conference 2000. Volume 1

    Science.gov (United States)

    Ramachandran, Narayanan (Editor); Bennett, Nancy (Editor); McCauley, Dannah (Editor); Murphy, Karen (Editor); Poindexter, Samantha (Editor)

    2001-01-01

    This is Volume 1 of 3 of the 2000 Microgravity Materials Science Conference that was held June 6-8 at the Von Braun Center, Huntsville, Alabama. It was organized by the Microgravity Materials Science Discipline Working Group, sponsored by the Microgravity Research Division (MRD) at NASA Headquarters, and hosted by NASA Marshall Space Flight Center and the Alliance for Microgravity Materials Science and Applications (AMMSA). It was the fourth NASA conference of this type in the microgravity materials science discipline. The microgravity science program sponsored approx. 200 investigators, all of whom made oral or poster presentations at this conference. In addition, posters and exhibits covering NASA microgravity facilities, advanced technology development projects sponsored by the NASA Microgravity Research Division at NASA Headquarters, and commercial interests were exhibited. The purpose of the conference was to inform the materials science community of research opportunities in reduced gravity and to highlight the Spring 2001 release of the NASA Research Announcement (NRA) to solicit proposals for future investigations. It also served to review the current research and activities in materials science, to discuss the envisioned long-term goals, and to highlight new crosscutting research areas of particular interest to MRD. The conference was aimed at materials science researchers from academia, industry, and government. A workshop on in situ resource utilization (ISRU) was held in conjunction with the conference with the goal of evaluating and prioritizing processing issues in Lunar and Martian type environments. The workshop participation included invited speakers and investigators currently funded in the materials science program under the Human Exploration and Development of Space (HEDS) initiative. The conference featured a plenary session every day with an invited speaker that was followed by three parallel breakout sessions in subdisciplines.
Attendance was

  14. Mendel conference

    CERN Document Server

    2015-01-01

    This book is a collection of selected accepted papers of the Mendel conference held in Brno, Czech Republic, in June 2015. The book contains three chapters representing recent advances in soft computing, including intelligent image processing and bio-inspired robotics: Chapter 1: Evolutionary Computing and Swarm Intelligence; Chapter 2: Neural Networks, Self-organization, and Machine Learning; and Chapter 3: Intelligent Image Processing and Bio-inspired Robotics. The Mendel conference was established in 1995, and it carries the name of the scientist and Augustinian priest Gregor J. Mendel, who discovered the famous Laws of Heredity. In 2015 we are commemorating 150 years since Mendel's lectures, which he presented in Brno in February and March 1865. The main aim of the conference was to create a periodical possibility for students, academics and researchers to exchange their ideas and novel research methods.

  15. The Conference ""Theoretical and Practical Aspects of Public Finance"" 2001

    OpenAIRE

    David Trytko

    2001-01-01

    The Public Finance Department of the University of Economics in Prague (VŠE Praha) sponsored an international conference on "Theoretical and Practical Aspects of Public Finance". The main topics included fiscal decentralisation, the efficiency of the public sector, EU enlargement, and the role of the EU structural funds.

  16. The 11th Beijing International Printing Information Conference was held

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    INFOPRINT 2008, the 11th Beijing International Printing Information Conference, was held on November 28, 2008 in the Beijing Friendship Hotel. It was sponsored by PEIAC, the Printing and Printing Equipment Industries Association of China. Xu Jinfeng, vice chairman & secretary general of PEIAC, and Tan Junqiao, advisor of PEIAC

  17. Fourth annual conference on materials for coal conversion and utilization

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    The fourth annual conference on materials for coal conversion and utilization was held October 9 to 11, 1979, at the National Bureau of Standards, Gaithersburg, Maryland. It was sponsored by the National Bureau of Standards, the Electric Power Research Institute, the US Department of Energy, and the Gas Research Institute. The papers have been entered individually into EDB and ERA. (LTN)

  18. The TianHe-1A Supercomputer: Its Hardware and Software

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Xiang-Ke Liao; Kai Lu; Qing-Feng Hu; Jun-Qiang Song; Jin-Shu Su

    2011-01-01

    This paper presents an overview of the TianHe-1A (TH-1A) supercomputer, which was built by the National University of Defense Technology of China (NUDT). TH-1A adopts a hybrid architecture integrating CPUs and GPUs, and its interconnect network is a proprietary high-speed communication network. The theoretical peak performance of TH-1A is 4700 TFlops, and its LINPACK test result is 2566 TFlops. It was ranked No. 1 on the TOP500 list released in November 2010. TH-1A is now deployed in the National Supercomputer Center in Tianjin and provides high performance computing services. TH-1A has played an important role in many applications, such as oil exploration, weather forecasting, and biomedical research.

  19. Explaining the Gap between Theoretical Peak Performance and Real Performance for Supercomputer Architectures

    Directory of Open Access Journals (Sweden)

    W. Schönauer

    1994-01-01

    The basic architectures of vector and parallel computers and their properties are presented, followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For a single operation, micromeasurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented, revealing in detail the losses for this operation. The global performance of a whole supercomputer is then considered by identifying reduction factors that reduce the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures as of January 1991 is briefly mentioned. Finally, a user-friendly architecture for a supercomputer is proposed.
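
The reduction-factor view in this abstract can be written as a one-line model: sustained performance is the theoretical peak multiplied by a chain of factors in (0, 1]. The factor names below are illustrative, not Schönauer's exact taxonomy.

```python
# Sustained performance as peak times a product of reduction factors
# (e.g. memory-bandwidth limit, vector length, parallel efficiency).
from functools import reduce

def real_performance(peak_flops, factors):
    """Multiply peak performance by each reduction factor in turn."""
    return reduce(lambda acc, f: acc * f, factors, peak_flops)
```

For example, a machine with a 1000 MFLOPS peak and illustrative factors of 0.5 (memory bottleneck), 0.4 (short vectors) and 0.5 (parallel losses) sustains only 100 MFLOPS, a factor-of-10 gap of the kind the article analyzes.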

  20. HACC: Simulating Sky Surveys on State-of-the-Art Supercomputing Architectures

    CERN Document Server

    Habib, Salman; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukic, Zarija; Sehrish, Saba; Liao, Wei-keng

    2014-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of prog...

  1. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    Science.gov (United States)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

The Altamira Supercomputer, hosted at the Instituto de Fisica de Cantabria (IFCA), entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order-of-magnitude reduction of the waiting time, is presented.

  2. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  3. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.
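The central idea of the suite, varying the relative importance of computation and communication, can be illustrated with a toy kernel. This sketch is not BSMBench itself (which is an MPI-based Monte Carlo lattice code); serialization stands in for message passing, and all names and sizes are invented for illustration:

```python
import time, pickle

def toy_benchmark(n_elems, compute_iters, comm_rounds):
    """Toy kernel in the spirit of BSMBench: the ratio of compute_iters
    to comm_rounds shifts the workload between the compute phase and a
    simulated communication phase. Returns the two phase timings."""
    data = [float(i) for i in range(n_elems)]

    t0 = time.perf_counter()
    for _ in range(compute_iters):               # compute-bound phase
        data = [x * 1.0000001 + 0.5 for x in data]
    t_compute = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(comm_rounds):                 # "communication" phase
        data = pickle.loads(pickle.dumps(data))  # stand-in for message passing
    t_comm = time.perf_counter() - t0
    return t_compute, t_comm

tc, tm = toy_benchmark(10_000, compute_iters=50, comm_rounds=5)
```

Sweeping `compute_iters` against `comm_rounds` mimics how the suite stresses either the processors or the interconnect.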

  4. Towards 21st century stellar models: Star clusters, supercomputing and asteroseismology

    Science.gov (United States)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.; Meakin, C.; Stello, D.; Christensen-Dalsgaard, J.; Kuehn, C.; De Silva, G. M.; Arnett, W. D.; Lattanzio, J. C.; MacLean, B. T.

    2016-09-01

Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy - through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys - are placing stellar models under greater quantitative scrutiny than ever. Model limitations are being exposed, and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular, and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling. This study uses observational data from HST, VLT, AAT, and Kepler, and supercomputing resources in Australia provided by the National Computational Infrastructure (NCI) and the Pawsey Supercomputing Centre.

  5. Conference Notification

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

Roskill Information Services and Metal Events Ltd are organizing the 2nd International Rare Earths Conference, which will be held at the Conrad Hotel in Hong Kong from February 28 to March 2, 2006. The program is structured to cover all the main aspects of the rare earths industry, including development of the Chinese rare earth industry; trends in rare earths demand; potential constraints on supply; and research on the potential capacity of the rare earths supply chain. Global rare earths consumers will attend the conference. Registra...

  6. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance of failing, and failing often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault-tolerance platform to run applications on large machines. Most fault-tolerance strategies can be optimized for the peculiarities of each system, boosting efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine with executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental to developing appropriate fault-tolerance solutions for Cray systems similar to Titan.
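At its core, cross-correlating failure logs with job records is interval matching between failure timestamps and job execution windows. A minimal sketch of that idea follows (the data layout is hypothetical; this is not the tooling used on Titan):

```python
from bisect import bisect_right

def jobs_hit_by_failures(jobs, failures):
    """Return the ids of jobs that had at least one failure while running.
    jobs: list of (job_id, start, end) tuples; failures: sorted timestamps.
    Illustrative sketch of log correlation, not ORNL's analysis code."""
    hit = set()
    for job_id, start, end in jobs:
        # index of the first failure at or after `start`
        lo = bisect_right(failures, start - 1e-9)
        if lo < len(failures) and failures[lo] <= end:
            hit.add(job_id)
    return hit

jobs = [("A", 0, 10), ("B", 5, 20), ("C", 30, 40)]
failures = [7.5, 25.0]
affected = jobs_hit_by_failures(jobs, failures)
# the failure at t=7.5 overlaps jobs A and B; t=25.0 hits no job
```

The same matching, scaled up, is what lets failure patterns be attributed to the user experience of specific jobs.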

  7. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  8. TSP:A Heterogeneous Multiprocessor Supercomputing System Based on i860XP

    Institute of Scientific and Technical Information of China (English)

    黄国勇; 李三立

    1994-01-01

Numerous new RISC processors provide support for supercomputing. Using the "mini-Cray" i860 superscalar processor, an add-on board has been developed to boost the performance of a real-time system, and a parallel heterogeneous multiprocessor supercomputing system, TSP, has been constructed. In this paper, we present the system design considerations and describe the architecture of the TSP and its features.

  9. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  10. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, assessment of management systems, and modeling of economic system sustainability. These studies have created a basis for the development of a new research area, the Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing complex socio-economic systems. Extensive application and development of models, and of system modeling using supercomputer technologies, will, we firmly believe, bring the study of socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less important, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all regarding the technical realization of large-scale agent-focused models (AFM. The essence of this tool is that, owing to the growth in computing power, it has become possible to describe the behavior of many separate fragments of a complex system, as socio-economic systems are. The article also discusses the experience of foreign scientists and practitioners in running AFM on supercomputers, along with the example of an AFM developed at CEMI RAS; the stages and methods of effectively mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer are analyzed. Experiments based on model simulation forecasting the population of St. Petersburg according to three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the
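The agent-focused modelling (AFM) approach described above can be illustrated with a deliberately minimal population model, in which each agent is updated independently every simulated year. This toy sketch uses invented birth and death probabilities and is in no way the CEMI RAS model:

```python
import random

def simulate_population(n0, years, birth_rate, death_rate, seed=42):
    """Minimal agent-based sketch: each agent independently survives
    (ageing by one year) and may produce one offspring per year.
    Returns the population count per simulated year."""
    rng = random.Random(seed)
    ages = [rng.randint(0, 80) for _ in range(n0)]  # hypothetical age structure
    history = [n0]
    for _ in range(years):
        survivors = [a + 1 for a in ages if rng.random() > death_rate]
        births = sum(1 for _ in survivors if rng.random() < birth_rate)
        ages = survivors + [0] * births
        history.append(len(ages))
    return history

hist = simulate_population(1000, years=5, birth_rate=0.012, death_rate=0.010)
```

On a supercomputer, the same per-agent update would be partitioned across many cores, which is exactly the mapping problem the article discusses.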

  11. 75 FR 15401 - Information Collection; Online Registration for FSA-sponsored Events and Conferences

    Science.gov (United States)

    2010-03-29

    .... The information is used to collect payment from the respondents and make hotel reservations and other... burden including the validity of the methodology and assumptions used; (3) Enhance the quality, utility..., mechanical, or other technological collection techniques or other forms of information technology. All...

  12. 78 FR 16649 - Information Collection; Online Registration for FSA-sponsored Events and Conferences

    Science.gov (United States)

    2013-03-18

    ... to collect payment from the respondents and make hotel reservations and other special arrangements as... validity of the methodology and assumptions used; (3) Enhance the quality, utility, and clarity of the... technological collection techniques or other forms of information technology. All responses to this notice...

  13. PTC '83. Pacific Telecommunications Conference. Papers and Proceedings of a Conference (Honolulu, Hawaii, January 16-19, 1983).

    Science.gov (United States)

    Wedemeyer, Dan J., Ed.

    These 40 papers were selected on the basis of their contribution to building an effective knowledge base for professionals facing the Pacific telecommunications challenge. A foreword, a list of conference organizations and sponsors, and 13 session summaries precede the papers, which are organized generally by topic: (1) local and national…

  14. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
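The sorted k-mer lists named as a core data structure can be sketched in a few lines. This is an illustrative serial version (the paper's implementation targets distributed memory on the IBM Blue Gene/P):

```python
def sorted_kmer_list(sequence, k):
    """Build the sorted list of (k-mer, position) pairs for one sequence,
    the kind of structure progressiveMauve-style aligners use for seed
    matching. Illustrative sketch, not the BG/P implementation."""
    kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    kmers.sort()  # sorted order enables merge-based matching across genomes
    return kmers

print(sorted_kmer_list("GATTACA", 3))
# [('ACA', 4), ('ATT', 1), ('GAT', 0), ('TAC', 3), ('TTA', 2)]
```

Because each genome's list is sorted, shared k-mers between genomes can be found with a linear merge rather than all-pairs comparison.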

  15. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency relative to processor cycle time limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic with dense, low-latency, high-bandwidth DRAM and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  16. Conference Report: CAQD Conference 2013

    Directory of Open Access Journals (Sweden)

    Christina Silver

    2013-05-01

Full Text Available Nestled on the banks of the river Lahn in central Germany, the 15th CAQD conference was held in Marburg, a beautiful provincial town and one of the very few spared the bombings of WWII, now providing the perfect backdrop for a meeting to discuss developments in qualitative technology. This was the second international conference in the series, with more than 140 delegates from 14 countries, including Canada, Brazil, Portugal, and the UK, as well as Germany. Hosted by MAGMA, the Marburg Research Group for Methodology and Evaluation, in partnership with Philipps-University Marburg, CAQD prioritizes a user focus that balances practical and methodological workshops with conference presentations. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1302249

  17. Conduct of the International Multigrid Conference

    Science.gov (United States)

    Mccormick, S.

    1984-01-01

The 1983 International Multigrid Conference was held at Colorado's Copper Mountain Ski Resort, April 5-8. It was organized jointly by the Institute for Computational Studies at Colorado State University, U.S.A., and the Gesellschaft für Mathematik und Datenverarbeitung, Bonn, F.R. Germany, and was sponsored by the Air Force Office of Scientific Research and National Aeronautics and Space Administration Headquarters. The conference was attended by 80 scientists, divided by institution almost equally into private industry, research laboratories, and academia. Fifteen attendees came from countries other than the U.S.A. In addition to the fruitful discussions, the most significant element of the conference was of course the lectures. The lecturers included most of the leaders in the field of multigrid research. The program offered a nicely integrated blend of theory, numerical studies, basic research, and applications. Some of the new areas of research that have surfaced since the Köln-Porz conference include: the algebraic multigrid approach; multigrid treatment of the Euler equations for inviscid fluid flow problems; 3-D problems; and the application of MG methods on vector and parallel computers.

  18. Conference Hopes

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

Annual conference outlines tasks for 2010 to solidify China's economic recovery through rational investment and increasing consumption. China will adhere to a consistent and stable economic strategy, putting in place a proactive fiscal policy and an accommodative monetary policy for the 2010 fiscal year - the macro-economic course mapped out during China's Central

  19. Conference proceedings

    African Journals Online (AJOL)

    abp

    2015-08-07

    Aug 7, 2015 ... African epidemiological association and 1st conference of the Cameroon society of ... International Reference Centre (CIRCB) for research on HIV/AIDS prevention and .... interests (third line regimens, clinical trials and HIV functional cure). ... sharing. Regarding Mycobacterium Tuberculosis, the efficacy of.

  20. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for petascale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data are limited by several factors that must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) the performance of interactive rendering, the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering; for petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate, and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
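The in-situ pattern described here couples the solver with analysis at runtime: instead of writing raw state to disk every timestep, the simulation invokes an analysis callback inside its main loop. A minimal sketch of that pattern follows (a toy 1-D diffusion update stands in for a real solver; it bears no relation to the actual milestone codes):

```python
def run_simulation(steps, analyze):
    """Sketch of in-situ analysis: the solver calls `analyze` every
    timestep, so summaries are computed while the full state is still
    in memory, rather than after writing it to disk."""
    state = [0.0] * 10
    state[5] = 100.0                      # initial heat spike
    for step in range(steps):
        state = [
            state[i] + 0.25 * ((state[i - 1] if i > 0 else 0.0)
                               + (state[i + 1] if i < 9 else 0.0)
                               - 2 * state[i])
            for i in range(10)
        ]
        analyze(step, state)              # in-situ hook inside the solver loop

summaries = []
run_simulation(3, lambda step, s: summaries.append((step, max(s))))
# summaries holds (step, peak temperature): the peak decays each step
```

Only the small reduced summaries survive each timestep, which is precisely how in-situ analysis sidesteps the I/O bandwidth bottleneck.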

  1. Mini-conference and Related Sessions on Laboratory Plasma Astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Hantao Ji

    2004-02-27

    This paper provides a summary of some major physics issues and future perspectives discussed in the Mini-Conference on Laboratory Plasma Astrophysics. This Mini-conference, sponsored by the Topical Group on Plasma Astrophysics, was held as part of the American Physical Society's Division of Plasma Physics 2003 Annual Meeting (October 27-31, 2003). Also included are brief summaries of selected talks on the same topic presented at two invited paper sessions (including a tutorial) and two contributed focus oral sessions, which were organized in coordination with the Mini-Conference by the same organizers.

  2. Conference Proceedings: “Down Syndrome: National Conference on Patient Registries, Research Databases, and Biobanks”

    OpenAIRE

    Oster-Granite, Mary Lou; Parisi, Melissa A.; Abbeduto, Leonard; Berlin, Dorit S.; Bodine, Cathy; Bynum, Dana; Capone, George; Collier, Elaine; Hall, Dan; Kaeser, Lisa; Kaufmann, Petra; Krischer, Jeffrey; Livingston, Michelle; McCabe, Linda L.; Pace, Jill

    2011-01-01

    A December 2010 meeting, “Down Syndrome: National Conference on Patient Registries, Research Databases, and Biobanks,” was jointly sponsored by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) at the National Institutes of Health (NIH) in Bethesda, MD, and the Global Down Syndrome Foundation (GDSF)/Linda Crnic Institute for Down Syndrome based in Denver, CO. Approximately 70 attendees and organizers from various advocacy groups, federal agencies (Cen...

  3. [Criminal implication of sponsoring in medicine: legal ramifications and recommendations].

    Science.gov (United States)

    Mahnken, A H; Theilmann, M; Bolenz, M; Günther, R W

    2005-08-01

    As a consequence of the so-called "Heart-Valve-Affair" in 1994, the German public became aware of the potential criminal significance of industrial sponsoring and third-party financial support in medicine. Since 1997, when the German Anti-Corruption Law came into effect, the penal regulations regarding bribery and benefits for public officers were tightened. Due to the lack of explicit and generally accepted guidelines in combination with regional differences of jurisdiction, there is a lingering uncertainty regarding the criminal aspects of third-party funding and industrial sponsoring. The aim of this review is to summarize the penal and professional implications of third-party funding and sponsoring in medicine including recent aspects of jurisdiction. The currently available recommendations on this issue are introduced.

  4. Math/science education action conference report

    Energy Technology Data Exchange (ETDEWEB)

    1990-05-01

On October 8--10, 1989, the US Department of Energy, the Lawrence Hall of Science, and the Lawrence Berkeley Laboratory sponsored a Math/Science Education Action Conference in Berkeley, California. The conference was co-chaired by Admiral James D. Watkins, Secretary of Energy, and Dr. Glenn T. Seaborg, Chairman of the Lawrence Hall of Science. Nearly 250 scientists, educators, business executives, and government leaders came together to develop a concrete plan of action for restructuring and revitalizing mathematics and science education. Their target was to improve education for an entire cohort of children--the Class of 2007, the children born this school year--and their governing principle was one of collaboration, both between Federal agencies and between the public and private sectors. The report of the conference co-chairmen and participants is provided in this document. 41 figs.

  5. Unfulfilled translation opportunities in industry sponsored clinical trials

    DEFF Research Database (Denmark)

    Smed, Marie; Getz, Kenneth A.

    2013-01-01

    in the industry and site representatives are changing. The process of clinical trials has increased in complexity over the years, resulting in additional management layers. Besides an increase in internal management layers, sponsors often also outsource various tasks related to clinical trials to a CRO (Contract...... knowledge gained by physicians in the process of clinical trials. These restrictions to knowledge-transfer between site and sponsor are further challenged if CRO partners are integrated in the trial process. © 2013 Elsevier Inc. All rights reserved....

  6. Real Time Conference 2016 Overview

    Science.gov (United States)

    Luchetta, Adriano

    2017-06-01

    This is a special issue of the IEEE Transactions on Nuclear Science containing papers from the invited, oral, and poster presentation of the 20th Real Time Conference (RT2016). The conference was held June 6-10, 2016, at Centro Congressi Padova “A. Luciani,” Padova, Italy, and was organized by Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA) and the Istituto Nazionale di Fisica Nucleare. The Real Time Conference is multidisciplinary and focuses on the latest developments in real-time techniques in high-energy physics, nuclear physics, astrophysics and astroparticle physics, nuclear fusion, medical physics, space instrumentation, nuclear power instrumentation, general radiation instrumentation, and real-time security and safety. Taking place every second year, it is sponsored by the Computer Application in Nuclear and Plasma Sciences technical committee of the IEEE Nuclear and Plasma Sciences Society. RT2016 attracted more than 240 registrants, with a large proportion of young researchers and engineers. It had an attendance of 67 students from many countries.

  7. SIGEF Conference

    CERN Document Server

    Terceño-Gómez, Antonio; Ferrer-Comalat, Joan; Merigó-Lindahl, José; Linares-Mustarós, Salvador

    2015-01-01

This book is a collection of selected papers presented at the SIGEF conference, held at the Faculty of Economics and Business of the University of Girona (Spain), 06-08 July 2015. This edition of the conference was presented under the slogan "Scientific methods for the treatment of uncertainty in social sciences". There are different ways of dealing with uncertainty in management. The book focuses on soft computing theories and their role in assessing uncertainty in a complex world. It gives a comprehensive overview of quantitative management topics and discusses some of the most recent developments in all areas of business and management in soft computing, including Decision Making, Expert Systems and Forgotten Effects Theory, Forecasting Models, Fuzzy Logic and Fuzzy Sets, Modelling and Simulation Techniques, Neural Networks and Genetic Algorithms, and Optimization and Control. The book might be of great interest to anyone working in the area of management and business economics and might be es...

  8. Conference information

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Thermag Ⅳ - The 4th International Conference on Magnetic Refrigeration at Room Temperature of the IIR. Refrigeration technology is widely used today. However, traditional vapor compression/expansion refrigeration technology has some disadvantages, such as the low conversion efficiency of vapor compressors and the emission of ozone-depleting and greenhouse gases. Magnetic refrigeration is a new cooling technology with great application potential, characterized by high efficiency, energy saving, and environmental friendliness.

  9. QAP co-sponsors global meeting on quality assurance in developing countries.

    Science.gov (United States)

    1994-01-01

    A consultative meeting on quality health care in developing countries was held in the Netherlands immediately before the 1993 conference of the International Society of Quality Assurance in Health Care. Sponsored by the USAID-funded Quality Assurance Project in collaboration with the World Health Organization and the Danish foreign aid agency, DANIDA, the meeting brought together representatives from 17 developing countries. Participants enthusiastically exchanged experiences in adapting and applying quality assurance methods to resource-strained health care systems and valued the recommendations they received. Technical discussions focused on strategic planning, standard setting and monitoring, problem solving, and quality assurance capacity building. The meeting included background papers on each theme, synopses of the work of representatives of selected countries, and small group sessions. The participants recognized that certain structures, such as a data and health information monitoring system, must be in place to sustain a quality assurance program. There are also key environmental factors, including a commitment in the form of resource allocation from top leadership. The highlights of the meeting were presented at the general conference to great acclaim. Participants in the meeting benefitted from the information generated by the exchange of ideas and became unified in their understanding that quality assurance is a viable and necessary component of health care management. The success of the meeting led to the proposal which is under consideration that a permanent committee be established to ensure the participation of representatives of developing countries in international quality assurance activities.

  10. 48 CFR 970.5235-1 - Federally funded research and development center sponsoring agreement.

    Science.gov (United States)

    2010-10-01

    ... and development center sponsoring agreement. 970.5235-1 Section 970.5235-1 Federal Acquisition... Federally funded research and development center sponsoring agreement. As prescribed in 970.3501-4, the... Sponsoring Agreement (DEC 2000) (a) Pursuant to 48 CFR 35.017-1, this contract constitutes the sponsoring...

  11. Sponsoring congregations' answer to McGrath Thesis: corporate control.

    Science.gov (United States)

    Maida, A J

    1980-04-01

    Fr. Maida refutes McGrath's Thesis and posits that Catholic health care facilities face special corporate, financial, and theological administrative issues. By maintaining corporate control of the institutions they sponsor, religious congregations can determine institutional policies consistent with church teachings.

  12. Stricter Employment Protection and Firms' Incentives to Sponsor Training

    DEFF Research Database (Denmark)

    Messe, Pierre-Jean; Rouland, Bénédicte

    2014-01-01

    This paper uses a difference-in-differences approach, combined with propensity score matching, to identify the effect of older workers employment protection on French firms' incentives to sponsor training. Between 1987 and 2008, French firms laying off workers aged over 50 had to pay a tax...

  13. School-Sponsored Health Insurance: Planning for a New Reality

    Science.gov (United States)

    Liang, Bryan A.

    2010-01-01

    Health care reform efforts in both the Clinton and Obama administrations have attempted to address college and university health. Yet, although the world of health care delivery has almost universally evolved to managed care, school health programs have not. In general, school-sponsored health plans do little to improve access and have adopted…

  14. 7 CFR 225.15 - Management responsibilities of sponsors.

    Science.gov (United States)

    2010-01-01

    ...; meal pattern requirements; and the duties of a monitor. Each sponsor shall ensure that its..., they may be used to verify the current food stamp, FDPIR, or TANF certification for the children for... reduced price meal eligibility information to certain programs and individuals without parental consent...

  15. Consumer Perceptions of Sponsors of Disease Awareness Advertising

    Science.gov (United States)

    Hall, Danika V.; Jones, Sandra C.; Iverson, Donald C.

    2011-01-01

    Purpose: In many countries there is emerging concern regarding alliances between the pharmaceutical industry and health non-profit organizations (NPOs), and the increase of co-sponsored marketing activities such as disease awareness advertising. The current study aims to explore Australian women's perceptions of disease awareness advertising with…

  16. 7 CFR 225.14 - Requirements for sponsor participation.

    Science.gov (United States)

    2010-01-01

    ... 225.14 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS SUMMER FOOD SERVICE PROGRAM Sponsor and Site Provisions... Youth Sports Program; and (5) Private nonprofit organizations as defined in § 225.2. (c) General...

  18. Is it beneficial to have an alcoholics anonymous sponsor?

    Science.gov (United States)

    Tonigan, J Scott; Rice, Samara L

    2010-09-01

    Alcoholics Anonymous (AA) attendance is predictive of increased abstinence for many problem drinkers, and treatment referral to AA is common. Strong encouragement to acquire an AA sponsor is likewise typical, and findings about the benefits associated with social support for abstinence in AA support this practice, at least indirectly. Despite this widespread practice, however, prospective tests of the unique contribution of having an AA sponsor are lacking. This prospective study investigated the contribution of acquiring an AA sponsor using a methodologically rigorous design that isolated the specific effects of AA sponsorship. Participants were recruited from AA and outpatient treatment. Intake and follow-up assessments included questionnaires, semi-structured interviews, and urine toxicology screens. Eligibility criteria limited prior treatment and AA histories to clarify the relationship of interest while, for generalizability purposes, broad substance abuse criteria were used. Of the 253 participants, 182 (72%) provided complete data on measures central to the aims of this study. Overall reductions in alcohol, marijuana, and cocaine use were found over 12 months, and lagged analyses indicated that AA attendance significantly predicted increased abstinence. Logistic regressions showed that, during early AA affiliation but not later, having an AA sponsor predicted increased abstinence from alcohol, marijuana, and cocaine after first controlling for a host of AA-related, treatment, and motivational measures that are associated with AA exposure or are generally prognostic of outcome.

  19. 17 CFR 229.1104 - (Item 1104) Sponsors.

    Science.gov (United States)

    2010-04-01

    ... AND CONSERVATION ACT OF 1975-REGULATION S-K Asset-Backed Securities (Regulation AB) § 229.1104 (Item... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false (Item 1104) Sponsors. 229.1104 Section 229.1104 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION STANDARD...

  20. 76 FR 2807 - New Animal Drugs; Change of Sponsor

    Science.gov (United States)

    2011-01-18

    ... from Biopure Corp. to OPK Biotech, LLC. DATES: This rule is effective January 18, 2011. FOR FURTHER... Biotech, LLC, 11 and 39 Hurley St., Cambridge, MA 02141. There is no change in drug labeler code... addition, OPK Biotech, LLC, is not currently listed in the animal drug regulations as a sponsor of...

  1. Unfulfilled translation opportunities in industry sponsored clinical trials.

    Science.gov (United States)

    Smed, Marie; Getz, Kenneth A

    2013-05-01

    Knowledge generated by site representatives through their participation in clinical trials is valuable for testing new products in use and obtaining final market approval. Leveraging this important knowledge is challenging, however, as the formerly direct relationships between in-house industry staff and site representatives are changing. The process of clinical trials has increased in complexity over the years, resulting in additional management layers. Besides adding internal management layers, sponsors often also outsource various trial-related tasks to a CRO (Contract Research Organization), thereby adding another link in the relationship between site and sponsor. These changes are intended to optimize the time-consuming and costly trial phases; however, there is a need to study whether valuable knowledge and experience is compromised in the process. Limited research exists on the full range of clinical practice insights obtained by investigators during and after clinical trials and on how well these insights are transferred to study sponsors. This study explores the knowledge-transfer processes between sites and sponsors and the extent to which sites' knowledge gained in clinical trials is utilized by the industry. Responses from 451 global investigative site representatives are included in the study. The analysis of the extensive dataset reveals that the current processes of collaboration between sites and the industry restrict the leverage of valuable knowledge gained by physicians in the course of clinical trials. These restrictions on knowledge transfer between site and sponsor are further compounded when CRO partners are integrated in the trial process.

  2. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    Science.gov (United States)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
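
HACC's production kernels are far beyond a short sketch, but the core time-stepping that any gravitational N-body cosmology code performs can be illustrated with a direct-summation kick-drift-kick leapfrog. All names, units, and the softening value below are illustrative, not HACC's:

```python
import math

G = 1.0  # gravitational constant in illustrative code units

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations with Plummer softening."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0]**2 + dx[1]**2 + dx[2]**2 + eps**2
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick step; symplectic and time-reversible."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]  # half kick
            pos[i][k] += dt * vel[i][k]        # drift
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]  # half kick
    return pos, vel
```

Production codes like HACC replace the O(N²) direct sum with particle-mesh and tree/particle-particle methods to reach trillions of particles, but the integrator structure is the same.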

  3. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
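
The "light-weight MPI wrappers" in the abstract essentially hand each MPI rank its own share of single-threaded payloads, while the backfill capability sizes submissions to fit currently unused nodes. A minimal sketch of those two pieces of logic (function names are hypothetical; the real pilot framework is far more involved, and rank/size would come from MPI rather than arguments):

```python
def jobs_for_rank(jobs, rank, nranks):
    """Round-robin partition of a job list across MPI ranks.

    In a real wrapper, rank/nranks would come from the MPI runtime
    (e.g. COMM_WORLD rank and size) and each rank would then execute
    its payloads serially on one core of a multicore worker node.
    """
    if not 0 <= rank < nranks:
        raise ValueError("rank out of range")
    return [job for i, job in enumerate(jobs) if i % nranks == rank]

def backfill_job_size(free_nodes, max_nodes):
    """Size a submission to fit currently unused nodes (backfill),
    capped by a policy maximum -- the idea behind reducing PanDA job
    wait time while raising the machine's utilization."""
    return max(0, min(free_nodes, max_nodes))
```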

  4. Development of the general interpolants method for the CYBER 200 series of supercomputers

    Science.gov (United States)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  5. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As the heterogeneous supercomputing infrastructures are becoming more common, numerical developments in earthquake system research are particularly challenged by the dependence on the accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are two primary focus area today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboraratory, the world's largest hetergeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support the efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in the optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
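
AWP-ODC's actual kernels are staggered-grid finite differences for the full elastodynamic equations, but the "13-point stencil" the dissertation optimizes can be illustrated with the classic fourth-order Laplacian: the center point plus four neighbors along each of the three axes. A plain-Python sketch (grid layout and spacing are illustrative):

```python
# Fourth-order central coefficients for d2/dx2 on spacing h:
# (-1, 16, -30, 16, -1) / (12 h^2), applied along each axis.
C = (-1.0, 16.0, -30.0, 16.0, -1.0)

def laplacian13(u, i, j, k, h=1.0):
    """13-point fourth-order Laplacian of scalar field u at (i, j, k).

    u is a nested list indexed [x][y][z]; the point must lie at least
    two cells from every boundary so the stencil fits.
    """
    s = 0.0
    for off, c in zip((-2, -1, 0, 1, 2), C):
        s += c * u[i + off][j][k]  # x-axis contribution
        s += c * u[i][j + off][k]  # y-axis contribution
        s += c * u[i][j][k + off]  # z-axis contribution
    return s / (12.0 * h * h)
```

The data-locality and communication-hiding work described above amounts to tiling loops over such stencils for cache/GPU shared memory and exchanging two-cell-deep halos between subdomains while interior points are computed.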

  6. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    Science.gov (United States)

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
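
The task-parallel structure of such a screen is a work farm: independent ligand-docking tasks distributed to workers, followed by a postprocessing pass that ranks hits. A toy sketch of that pattern, with threads standing in for MPI ranks and a fabricated scoring stub in place of a real docking engine (everything here is hypothetical, not the AutoDock4 MPI code):

```python
from concurrent.futures import ThreadPoolExecutor

def dock_one(ligand):
    """Stand-in for one docking run: return (ligand, score).
    A real screen would invoke the docking engine on a ligand file;
    this deterministic pseudo-score exists only for illustration."""
    score = -sum(ord(c) for c in ligand) % 100 / 10.0
    return ligand, score

def screen(ligands, nworkers=4, keep=3):
    """Task-parallel virtual screen: farm ligands out to workers,
    then keep the `keep` best-scoring hits (lowest score = best)."""
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        results = list(pool.map(dock_one, ligands))
    return sorted(results, key=lambda r: r[1])[:keep]
```

Because every docking task is independent, the pattern scales to whatever worker count the machine provides; the MPI version in the paper applies the same master/worker idea across supercomputer nodes.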

  7. A New Hydrodynamic Model for Numerical Simulation of Interacting Galaxies on Intel Xeon Phi Supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Tutukov, Alexander

    2016-05-01

    This paper presents a new hydrodynamic model of interacting galaxies based on the joint solution of multicomponent hydrodynamic equations, first moments of the collisionless Boltzmann equation and the Poisson equation for gravity. Using this model, it is possible to formulate a unified numerical method for solving hyperbolic equations. This numerical method has been implemented for hybrid supercomputers with Intel Xeon Phi accelerators. The collision of spiral and disk galaxies considering the star formation process, supernova feedback and molecular hydrogen formation is shown as a simulation result.

  8. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  9. Scheduling Supercomputers.

    Science.gov (United States)

    1983-02-01

    no task is scheduled with overlap. Let numpi be the total number of preemptions and idle slots of size at most t0 that are introduced. We see that if no usable block remains on Qm-k, then numpi < m-k. Otherwise, numpi ≤ m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled

  10. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool, as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  11. EGC Conferences

    CERN Document Server

    Ritschard, Gilbert; Pinaud, Bruno; Venturini, Gilles; Zighed, Djamel; Advances in Knowledge Discovery and Management

    This book is a collection of representative and novel works done in Data Mining, Knowledge Discovery, Clustering and Classification that were originally presented in French at the EGC'2012 Conference held in Bordeaux, France, on January 2012. This conference was the 12th edition of this event, which takes place each year and which is now successful and well-known in the French-speaking community. This community was structured in 2003 by the foundation of the French-speaking EGC society (EGC in French stands for ``Extraction et Gestion des Connaissances'' and means ``Knowledge Discovery and Management'', or KDM). This book is intended to be read by all researchers interested in these fields, including PhD or MSc students, and researchers from public or private laboratories. It concerns both theoretical and practical aspects of KDM. The book is structured in two parts called ``Knowledge Discovery and Data Mining'' and ``Classification and Feature Extraction or Selection''. The first part (6 chapters) deals with...

  12. Quality and completeness of data documentation in an investigator-initiated trial versus an industry-sponsored trial.

    Science.gov (United States)

    Patwardhan, Soumil; Gogtay, Nithya; Thatte, Urmila; Pramesh, C S

    2014-01-01

    Literature on the quality and completeness of data and documentation in investigator-initiated research studies is scarce. We carried out a study to compare the quality of data and documentation in an investigator-initiated trial (IIT) with those in an industry-sponsored study. We retrospectively studied the archived data pertaining to 42 patients enrolled in two trials, 14 patients in an industry-sponsored study and 28 randomly selected patients from an IIT. Trial-related documents were examined and scored for the completeness of the acquisition of data and for storage as per a pre-formulated checklist. Weighted scores were given for each point on the checklist proportional to its relative importance in the data documentation process. A global score and sub-scores for specific modules were given for each subject. The scores in the two studies were compared using the Mann Whitney U test. The total score for general documents was similar in the IIT (14/14, 100%) and the sponsored study (24/25, 96%). The mean summary global score obtained for study-specific documents (maximum possible score, 32) in the IIT (27.1; 95% CI 26.4-27.8) was also not significantly different from that in the sponsored study (27.9; 95% CI 26.7-29.1; p=0.1291). Thus, investigator-initiated studies carried out by independent researchers in high-volume academic centres, even without active data monitoring and formal audits, appear to adhere to the high standards laid out in the International Conference on Harmonisation-Good Clinical Practices guidelines, ensuring accuracy and completeness in data documentation and archival.

  13. On a State-Sponsored Sport System in China

    Science.gov (United States)

    CAO, JIE; ZHIWEI, PAN

    2008-01-01

    The gold medal success of China in recent Olympic Games can be traced to the advancement of the state-sponsored sport system (SSSS). While the program was developed initially through socialist ideals, it is more than a centralized government system to monopolize resources for glorified sport performance. Participation in competition is an inherent part of the human condition. Success in athletics is associated with national identity and has economic, social, and cultural implications. Because of this, it is essential that the SSSS adjust and improve to keep pace with other facets of China’s quickly changing national reform. In association with emerging economic reform, some sports now receive equal or more funds from private investments compared to government allocation. The state-sponsored sport system must continue to adapt to maintain the Chinese tradition of excellence in competition. PMID:27182291

  14. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations powerful computers are obligatory, effectively splitting the thesis in two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.
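
The thesis's machinery is elaborate, but the underlying idea, stochastically sampling terms of a series expansion instead of enumerating them, can be shown in miniature. Below we estimate Σ_{n≥0} xⁿ/n! = eˣ by drawing the expansion order n from a geometric distribution and reweighting each draw; the distribution, sample count, and seed are all illustrative choices, not the thesis's:

```python
import math
import random

def sample_series(term, x, q=0.5, nsamples=20000, seed=42):
    """Monte Carlo estimate of sum_{n>=0} term(n, x) by importance
    sampling the order n ~ Geometric(q): P(n) = (1-q) * q**n.
    Each draw contributes term(n, x) / P(n); the sample mean is an
    unbiased estimator of the full series."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(nsamples):
        # draw n from the geometric distribution by repeated coin flips
        n = 0
        while rng.random() < q:
            n += 1
        p_n = (1.0 - q) * q**n
        total += term(n, x) / p_n
    return total / nsamples

def exp_term(n, x):
    """n-th term of the exponential series."""
    return x**n / math.factorial(n)
```

Diagrammatic Monte Carlo applies the same reweighting trick with n replaced by entire Feynman diagrams, sampled by a Markov chain over the diagrammatic space.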

  15. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  16. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    De, Kaushik; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  17. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Berkbigler, K. P. (Kathryn P.); Bush, B. W. (Brian W.); Davis, Kei,; Hoisie, A. (Adolfy); Smith, S. A. (Steve A.)

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and of the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, which is simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  18. Groundwater cooling of a supercomputer in Perth, Western Australia: hydrogeological simulations and thermal sustainability

    Science.gov (United States)

    Sheldon, Heather A.; Schaubs, Peter M.; Rachakonda, Praveen K.; Trefry, Michael G.; Reid, Lynn B.; Lester, Daniel R.; Metcalfe, Guy; Poulet, Thomas; Regenauer-Lieb, Klaus

    2015-12-01

    Groundwater cooling (GWC) is a sustainable alternative to conventional cooling technologies for supercomputers. A GWC system has been implemented for the Pawsey Supercomputing Centre in Perth, Western Australia. Groundwater is extracted from the Mullaloo Aquifer at 20.8 °C and passes through a heat exchanger before returning to the same aquifer. Hydrogeological simulations of the GWC system were used to assess its performance and sustainability. Simulations were run with cooling capacities of 0.5 or 2.5 megawatts thermal (MWth), with scenarios representing various combinations of pumping rate, injection temperature and hydrogeological parameter values. The simulated system generates a thermal plume in the Mullaloo Aquifer and the overlying Superficial Aquifer. Thermal breakthrough (transfer of heat from injection to production wells) occurred in 2.7-4.3 years for a 2.5 MWth system. Shielding (reinjection of cool groundwater between the injection and production wells) resulted in earlier thermal breakthrough but reduced the rate of temperature increase after breakthrough, such that shielding was beneficial after approximately 5 years of pumping. Increasing the injection temperature was preferable to increasing the flow rate for maintaining cooling capacity after thermal breakthrough. Thermal impacts on existing wells were small, with up to 10 wells experiencing a temperature increase ≥ 0.1 °C (largest increase 6 °C).
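
The breakthrough times reported above come from full hydrogeological simulation, but a first-order feel for the quantity can be had from the classic plug-flow estimate for a well doublet, t ≈ π·b·L²·(ρc)_aq / (3·Q·(ρc)_w). The sketch below uses that textbook formula with generic property values; none of the numbers are the paper's site data:

```python
import math

def doublet_breakthrough_years(L, b, Q, rc_aquifer=2.8e6, rc_water=4.18e6):
    """Rough thermal breakthrough time (years) for a well doublet,
    via the classic plug-flow estimate t = pi*b*L^2*rc_aq / (3*Q*rc_w).

    L: well spacing (m), b: aquifer thickness (m), Q: flow rate (m^3/s).
    rc_*: volumetric heat capacities in J/(m^3 K) -- typical textbook
    values, not site-specific measurements.
    """
    t_seconds = math.pi * b * L**2 * rc_aquifer / (3.0 * Q * rc_water)
    return t_seconds / (365.25 * 24 * 3600)
```

The estimate captures the scalings the study observed: breakthrough arrives sooner at higher pumping rates and later for wider well spacing, which is why raising the injection temperature (rather than the flow rate) is the gentler way to maintain capacity.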

  19. OpenMC: Towards Simplifying Programming for TianHe Supercomputers

    Institute of Scientific and Technical Information of China (English)

    廖湘科; 杨灿群; 唐滔; 易会战; 王锋; 吴强; 薛京灵

    2014-01-01

    Modern petascale and future exascale systems are massively heterogeneous architectures. Developing productive intra-node programming models is crucial toward addressing their programming challenge. We introduce a directive-based intra-node programming model, OpenMC, and show that this new model can achieve ease of programming, high performance, and the degree of portability desired for heterogeneous nodes, especially those in TianHe supercomputers. While existing models are geared towards offloading computations to accelerators (typically one), OpenMC aims to more uniformly and adequately exploit the potential offered by multiple CPUs and accelerators in a compute node. OpenMC achieves this by providing a unified abstraction of hardware resources as workers and facilitating the exploitation of asynchronous task parallelism on the workers. We present an overview of OpenMC, a prototyping implementation, and results from some initial comparisons with OpenMP and hand-written code in developing six applications on two types of nodes from TianHe supercomputers.

  20. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SC's) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" "What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  1. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which aggregates Polish academic supercomputer centers. Selected experimental results achieved through the use of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise-mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.
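
    The noise maps described above are built from aggregated sound-level measurements; the standard quantity for such aggregation is the equivalent continuous level, a logarithmic energy mean of short-term samples. A minimal sketch (the station samples are hypothetical, and this is not the PL-Grid service code):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (dB) of a series of short-term
    sound-level samples, computed as a logarithmic energy average."""
    energies = [10 ** (level / 10.0) for level in levels_db]
    return 10.0 * math.log10(sum(energies) / len(energies))

# Hypothetical one-minute samples from one monitoring station, in dB(A).
samples = [62.0, 65.0, 71.0, 68.0, 64.0]
print(round(leq(samples), 1))
```

    Because the average is taken over energies rather than decibels, a single loud interval dominates the result, which is why the metric is used for noise-threat assessment.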

  2. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the former expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has only recently been obtained at NIU, this effort in software development is at an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  3. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
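
    The "agents" are surrogate models trained on stored EnergyPlus results, so that answering a query costs a lookup or cheap model evaluation rather than a full simulation. A toy illustration of the idea only, not the Autotune code; the parameter names and values below are invented:

```python
# Toy surrogate: answer queries from stored (parameters -> energy use)
# simulation results instead of re-running the expensive simulator.
# The parameter table below is invented for illustration.

def nearest_neighbour_surrogate(training, query):
    """Return the simulated output of the closest stored parameter vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(training, key=lambda row: dist(row[0], query))
    return best[1]

# (insulation R-value, window U-factor, infiltration ACH) -> kWh/m^2/yr,
# as if each row were the output of one EnergyPlus run.
training = [
    ((3.0, 2.5, 0.7), 180.0),
    ((5.0, 1.8, 0.5), 140.0),
    ((7.0, 1.2, 0.3), 110.0),
]
print(nearest_neighbour_surrogate(training, (4.8, 1.9, 0.5)))
```

    The real work reported in the paper is in generating the millions of training runs on supercomputers and in the learning algorithms; the surrogate-lookup pattern is what makes the trained agents cheap to use afterwards.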

  4. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
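
    Of the two indicators named, the volume Herfindahl-Hirschman Index is simple to state: the sum of squared per-venue volume shares, with higher values meaning more concentrated trading. A minimal sketch with invented venue volumes (not the paper's data or its exact variant of the index):

```python
def volume_hhi(venue_volumes):
    """Herfindahl-Hirschman Index of market fragmentation: the sum of
    squared volume shares. 1.0 means all volume on one venue; values
    near 1/N mean volume spread evenly over N venues."""
    total = sum(venue_volumes)
    return sum((v / total) ** 2 for v in venue_volumes)

# Invented per-venue volumes for one trading interval.
print(round(volume_hhi([500, 300, 150, 50]), 3))   # concentrated
print(round(volume_hhi([250, 250, 250, 250]), 3))  # evenly fragmented
```

    Tracking this value over successive intervals is what turns a static concentration measure into a fragmentation signal of the kind the study examines.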

  5. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
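
    The intuition behind such reordering methods can be shown with a simple greedy placement that puts heavily-communicating MPI rank pairs on nearby nodes. This is a toy heuristic, not the spectral bisection or neighbor-join algorithms evaluated in the paper, and the communication volumes and node distances below are invented:

```python
def greedy_task_map(comm, node_dist):
    """Greedy heuristic: place heavily-communicating task pairs on
    nearby nodes.  comm[i][j] is the message volume between tasks i
    and j; node_dist[a][b] is the hop distance between allocated nodes
    a and b.  Returns mapping[task] = node.  Not optimal, just a sketch.
    """
    n = len(comm)
    # Visit task pairs in order of communication volume, heaviest first.
    pairs = sorted(((comm[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    mapping, free = {}, set(range(n))

    def nearest_free(node):
        return min(free, key=lambda f: node_dist[node][f])

    for _, i, j in pairs:
        if i not in mapping and j not in mapping:
            mapping[i] = min(free)
            free.remove(mapping[i])
        # Place each unplaced partner as close as possible to the other.
        for a, b in ((i, j), (j, i)):
            if a in mapping and b not in mapping:
                mapping[b] = nearest_free(mapping[a])
                free.remove(mapping[b])
    return mapping

# Four tasks on a four-node chain (invented volumes): tasks 0-1 and
# 2-3 exchange most of the traffic, so they should land on adjacent nodes.
comm = [[0, 100, 1, 0],
        [100, 0, 0, 0],
        [1, 0, 0, 90],
        [0, 0, 90, 0]]
chain = [[abs(a - b) for b in range(4)] for a in range(4)]
placement = greedy_task_map(comm, chain)
print(placement)
```

    The production methods in the paper solve the same matching problem more globally; a tool like mpiAproxy then measures the resulting communication cost without running the full application.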

  6. Budget constraints and optimization in sponsored search auctions

    CERN Document Server

    Yang, Yanwu

    2013-01-01

    The Intelligent Systems Series publishes reference works and handbooks in three core sub-topic areas: Intelligent Automation, Intelligent Transportation Systems, and Intelligent Computing. They include theoretical studies, design methods, and real-world implementations and applications. The series' readership is broad, but focuses on engineering, electronics, and computer science. Budget Constraints and Optimization in Sponsored Search Auctions takes into consideration the entire life cycle of campaigns, for researchers and developers working on search systems and ROI maximization.

  7. Using Neural Networks for Click Prediction of Sponsored Search

    OpenAIRE

    Baqapuri, Afroze Ibrahim; Trofimov, Ilya

    2014-01-01

    Sponsored search is a multi-billion dollar industry and a major source of revenue for search engines (SEs). Click-through-rate (CTR) estimation plays a crucial role in ad selection and greatly affects SE revenue, advertiser traffic, and user experience. We propose a novel architecture for solving the CTR prediction problem by combining artificial neural networks (ANNs) with decision trees. First we compare ANN with respect to other popular machine learning models being used for this ...
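
    As a baseline for the CTR prediction task described, a tiny logistic-regression click model can be trained by stochastic gradient descent. This is a sketch of the prediction problem only, not the ANN-plus-decision-tree architecture the paper proposes, and the features are invented:

```python
import math

def train_logreg(rows, labels, lr=0.1, epochs=200):
    """Tiny logistic-regression CTR model trained by stochastic
    gradient descent.  rows: feature vectors; labels: 1 = click,
    0 = no click."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y                       # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_ctr(w, b, x):
    """Predicted click probability for one ad impression."""
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Invented features per impression: (ad position score, query-ad match score).
X = [(1.0, 0.9), (0.8, 0.2), (0.3, 0.8), (0.1, 0.1)]
y = [1, 0, 1, 0]
w, b = train_logreg(X, y)
```

    Real systems replace the hand-picked features with learned representations, which is the gap the paper's combined ANN and decision-tree architecture targets.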

  8. Biopharmaceutical industry-sponsored global clinical trials in emerging countries

    OpenAIRE

    Lenio Souza Alvarenga; Elisabeth Nogueira Martins

    2010-01-01

    OBJECTIVE: To evaluate biopharmaceutical industry-sponsored clinical trials placed in countries previously described as emerging regions for clinical research, and potential differences for those placed in Brazil. METHODS: Data regarding recruitment of subjects for clinical trials were retrieved from www.clinicaltrials.gov on February 2nd 2009. Proportions of sites in each country were compared among emerging countries. Multiple logistic regressions were performed to evaluate whether trial pl...

  9. Is Mail Service Pharmacy Cost Beneficial to Plan Sponsors?

    Directory of Open Access Journals (Sweden)

    Larisa Vulakh, Student Pharmacist

    2011-01-01

    Full Text Available The objective of this study was to describe and compare prescription drug costs charged to a plan sponsor for the top 50 maintenance medications provided through retail and mail service procurement channels. Data were obtained for covered beneficiaries of a health plan sponsored by an employer with just over 3,000 covered employees. The analytics team at the PBM administering the plan sponsor’s prescription drug benefit provided de-identified claims information for the top 50 maintenance prescription drugs delivered through either mail service or retail procurement methods for this employer over a one-year period (7/1/2008 to 6/30/2009). Based on these data, (1) the dollar amount difference (mail service minus retail) and (2) the percentage difference between mail and retail costs (as a percentage of the lower net cost per day) were computed. The findings revealed that 76 percent of the medication products studied were associated with a lower net cost per day to the plan sponsor through mail service procurement and 24 percent were associated with a lower net cost through retail procurement.
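
    The two comparison measures used in the study are straightforward to compute per drug. A minimal sketch with hypothetical per-day costs (the study's actual claims data are not reproduced here):

```python
def cost_difference(mail_cost_per_day, retail_cost_per_day):
    """The two comparison measures described in the study:
    (1) dollar difference per day (mail service minus retail), and
    (2) percentage difference relative to the lower net cost per day."""
    diff = mail_cost_per_day - retail_cost_per_day
    pct = abs(diff) / min(mail_cost_per_day, retail_cost_per_day) * 100.0
    return diff, pct

# Hypothetical net costs per day for one maintenance drug.
diff, pct = cost_difference(mail_cost_per_day=1.80, retail_cost_per_day=2.00)
print(round(diff, 2), round(pct, 1))
```

    A negative dollar difference marks a drug that is cheaper by mail, which is how the 76/24 percent split in the findings would be tallied across the 50 drugs.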

  10. Biopharmaceutical industry-sponsored global clinical trials in emerging countries.

    Science.gov (United States)

    Alvarenga, Lenio Souza; Martins, Elisabeth Nogueira

    2010-01-01

    To evaluate biopharmaceutical industry-sponsored clinical trials placed in countries previously described as emerging regions for clinical research, and potential differences for those placed in Brazil. Data regarding recruitment of subjects for clinical trials were retrieved from www.clinicaltrials.gov on February 2nd 2009. Proportions of sites in each country were compared among emerging countries. Multiple logistic regressions were performed to evaluate whether trial placement in Brazil could be predicted by trial location in other countries and/or by trial features. A total of 8,501 trials were then active and 1,170 (13.8%) included sites in emerging countries (i.e., Argentina, Brazil, China, Czech Republic, Hungary, India, Mexico, Poland, Russia, South Korea, and South Africa). South Korea and China presented a significantly higher proportion of sites when compared to other countries. The logistic regressions detected no negative correlation between placement in other countries and placement in Brazil. Trials involving subjects less than 15 years of age, those with targeted recruitment of at least 1,000 subjects, and seven sponsors were identified as significant predictors of trial placement in Brazil. No clear direct competition between Brazil and other emerging countries was detected. South Korea showed the highest proportion of sites and ranked third in total number of trials, appearing as a major player in attractiveness for biopharmaceutical industry-sponsored clinical trials.

  11. Population conference: consensus and conflict.

    Science.gov (United States)

    Willson, P D

    1984-01-01

    The United Nations-sponsored International Conference on Population held in Mexico City was both a rejection and an affirmation of a new policy of the Reagan administration. The policy denies international family planning funds to nongovernmental organizations that perform or actively promote abortion as a family planning method in other nations. A compromise statement was accepted urging governments to take appropriate measures to discourage abortion as a family planning method and, when possible, to provide for the humane treatment and counseling of women who resorted to abortion. The statement on abortion was 1 of 88 recommendations approved by the conference. The commitment expressed in the 10-year-old World Population Plan of Action to the rights and responsibilities of all people was reaffirmed. The conference also endorsed family life education and sex education, as well as suitable family planning information and services for adolescents, with due consideration given to the role, rights, and obligations of parents. Increased support for international population and family planning programs was urged, and World Bank President Clausen urged a 4-fold increase in international funding by the year 2000. Most of the conference's recommendations were devoted to the broad range of population policy issues, including morbidity and mortality, international and internal migration, the relationship between population and economic development, and the status of women. The purpose of the recommendations is to increase the momentum of international support. The Mexico City conference was characterized by a remarkable degree of consensus about population policies with respect to integration with economic development, the need to respect individual rights, and the recognition that all nations have sovereign rights to develop and implement their own population policies. Conflict and controversy arose in the areas of the arms race and the Middle East.
The US position on abortion funding

  12. MUSME Conference

    CERN Document Server

    Martinez, Eusebio

    2015-01-01

    This volume contains the Proceedings of MUSME 2014, held at Huatulco in Oaxaca, Mexico, October 2014. Topics include analysis and synthesis of mechanisms; dynamics of multibody systems; design algorithms for mechatronic systems; simulation procedures and results; prototypes and their performance; robots and micromachines; experimental validations; theory of mechatronic simulation; mechatronic systems; and control of mechatronic systems. The MUSME symposium on Multibody Systems and Mechatronics was held under the auspices of IFToMM, the International Federation for Promotion of Mechanism and Machine Science, and FeIbIM, the Iberoamerican Federation of Mechanical Engineering. Since the first symposium in 2002, MUSME events have been characterised by the way they stimulate the integration between the various mechatronics and multibody systems dynamics disciplines, present a forum for facilitating contacts among researchers and students mainly in South American countries, and serve as a joint conference for the ...

  13. 4th International Symposium on Sensor Science (I3S2015): Conference Report

    Directory of Open Access Journals (Sweden)

    Peter Seitz

    2015-09-01

    Full Text Available An international scientific conference was sponsored by the journal Sensors under the patronage of the University of Basel. The 4th edition of the International Symposium on Sensor Science (I3S2015) ran from 13 to 15 July 2015 in Basel, Switzerland. It comprised five plenary sessions and one morning with three parallel sessions. The conference covered the most exciting aspects and the latest developments in sensor science. The conference dinner took place on the second evening. The I3S2015 brought together 170 participants from 40 different countries. [...

  14. History of NAMES Conferences

    Science.gov (United States)

    Filippov, Lev

    2013-03-01

    France and the Lorraine Region Council. The conferences have indicated directions for future research and stimulated the possibilities of cooperation between scientists from Lorraine and Russian universities and academic institutions. The participants of the conferences reviewed the remarkable worldwide progress with numerous breakthroughs in areas of fundamental research and industrial applications, specifically in the fields of nanomaterials and nanotechnologies, surface engineering, biomaterials and multifunctional coatings, functionally graded materials, new materials for microelectronics and optics, nanostructured thin films and nanodispersion strengthening coatings, combustion synthesis, new micro- and nanosystems and devices, natural resources, environmental sciences, clean technology, and recently, natural fibrous materials, etc. The participants consider that new fundamental knowledge, new materials, and industrial production methods generated as a result of international cooperation between both countries will be of interest to the industrial sector in Lorraine and Moscow, France and Russia. Professor Lev O Filippov Coordinator of NAMES conferences The PDF also contains details of the conference sponsors and organizing committees.

  15. Federal Council on Science, Engineering and Technology: Committee on Computer Research and Applications, Subcommittee on Science and Engineering Computing: The US Supercomputer Industry

    Energy Technology Data Exchange (ETDEWEB)

    1987-12-01

    The Federal Coordinating Council on Science, Engineering, and Technology (FCCSET) Committee on Supercomputing was chartered by the Director of the Office of Science and Technology Policy in 1982 to examine the status of supercomputing in the United States and to recommend a role for the Federal Government in the development of this technology. In this study, the FCCSET Committee (now called the Subcommittee on Science and Engineering Computing of the FCCSET Committee on Computer Research and Applications) reports on the status of the supercomputer industry and addresses changes that have occurred since issuance of the 1983 and 1985 reports. The review was based upon periodic meetings with and site visits to supercomputer manufacturers and consultation with experts in high performance scientific computing. White papers have been contributed to this report by industry leaders and supercomputer experts.

  16. Asthma: NIH-Sponsored Research and Clinical Trials | NIH MedlinePlus the Magazine

    Science.gov (United States)

    Asthma: NIH-Sponsored Research and Clinical Trials. Past Issues / Fall 2011. NIH-Sponsored Research: Asthma in the Inner City: Recognizing that asthma severity ...

  17. New Product Development. Engineering and Commerce Students Join Forces with a Corporate Sponsor.

    Science.gov (United States)

    Audet, Josee; Pegna, Joseph

    2001-01-01

    Mechanical engineering and business student teams developed new products using a corporate sponsor's technology in a simulated business setting. Students learned about product development and venture start-up, and the sponsor gained new applications for its patented technology. (SK)

  19. PROCEEDINGS OF THE 1999 OIL HEAT TECHNOLOGY CONFERENCE AND WORKSHOP.

    Energy Technology Data Exchange (ETDEWEB)

    MCDONALD,R.J.

    1999-04-01

    The 1999 Oil Heat Technology Conference and Workshop, April 15-16 at Brookhaven National Laboratory (BNL), is sponsored by the U. S. Department of Energy, Office of Building Technology, State and Community Programs (DOEBTS). The meeting is also co-sponsored by the Petroleum Marketers Association of America, New England Fuel Institute, Oilheat Manufacturers Association, National Association of Oil Heat Service Managers, New York State Energy Research and Development Authority, Empire State Petroleum Association, New York Oil Heating Association, Oil Heat Institute of Long Island, and the Pennsylvania Petroleum Association. BNL is proud to acknowledge all of our 1999 co-sponsors; without their help and support the conference would have been canceled due to budget restrictions. It is quite gratifying to see an industry come together to help support an activity like the technology conference, for the benefit of the industry as a whole. The 1999 Oil Heat Technology Conference and Workshop, the thirteenth since 1984, is a very valuable technology transfer activity supported by the ongoing Combustion Equipment Technology (Oilheat R and D) program at BNL. The foremost reason for the conference is to provide a platform for the exchange of information and perspectives among international researchers, engineers, manufacturers, service technicians, and marketers of oil-fired space-conditioning equipment. These exchanges provide a conduit by which information and ideas can be exchanged to examine present technologies, as well as helping to develop the future course for oil heating advancement. These conferences also serve as a stage for unifying government representatives, researchers, fuel oil marketers, and other members of the oil-heat industry in addressing technology advancements in this important energy use sector.

  20. Combinatorics Advances : Papers from a Conference

    CERN Document Server

    Mahmoodian, Ebadollah

    1995-01-01

    On March 28-31, 1994 (Farvardin 8-11, 1373 by the Iranian calendar), the Twenty-fifth Annual Iranian Mathematics Conference (AIMC25) was held at Sharif University of Technology in Tehran, Islamic Republic of Iran. Its sponsors included the Iranian Mathematical Society, and the Department of Mathematical Sciences at Sharif University of Technology. Among the keynote speakers were Professor Dr. Andreas Dress and Professor Richard K. Guy. Their plenary lectures on combinatorial themes were complemented by invited and contributed lectures in a Combinatorics Session. This book is a collection of refereed papers, submitted primarily by the participants after the conference. The topics covered are diverse, spanning a wide range of combinatorics and allied areas in discrete mathematics. Perhaps the strength and variety of the papers here serve as the best indications that combinatorics is advancing quickly, and that the Iranian mathematics community contains very active contributors. We hope that you find the p...

  1. Conference on Manned Systems Design : New Methods and Equipment

    CERN Document Server

    Kraiss, K-F

    1981-01-01

    This volume contains the proceedings of a conference held in Freiburg, West Germany, September 22-25, 1980, entitled "Manned Systems Design, New Methods and Equipment". The conference was sponsored by the Special Programme Panel on Human Factors of the Scientific Affairs Division of NATO, and supported by Panel VIII, AC/243, on "Human and Biomedical Sciences". Their sponsorship and support are gratefully acknowledged. The contributions in the book are grouped according to the main themes of the conference with special emphasis on analytical approaches, measurement of performance, and simulator design and evaluation. The design of manned systems covers many and highly diversified areas. Therefore, a conference under the general title of "Manned Systems Design" is rather ambitious in itself. However, scientists and engineers engaged in the design of manned systems very often are confronted with problems that can be solved only by having several disciplines working together. So it was felt that knowledge about ...

  2. International Conference held at the University of Alberta

    CERN Document Server

    Strobeck, Curtis

    1983-01-01

    This volume contains the Proceedings of the International Conference in Population Biology held at The University of Alberta, Edmonton, Canada from June 22 to June 30, 1982. The Conference was sponsored by The University of Alberta and The Canadian Applied Mathematics Society, and overlapped with the summer meeting of CAMS. The main objectives of this Conference were: to bring mathematicians and biologists together so that they may interact for their mutual benefit; to bring those researchers interested in modelling in ecology and those interested in modelling in genetics together; to bring in keynote speakers in the delineated areas; to have sessions of contributed papers; and to present the opportunity for researchers to conduct workshops. With the exception of the last one, the objectives were carried out. In order to lend some focus to the Conference, the following themes were adopted: models of species growth, predator-prey, competition, mutualism, food webs, dispersion, age structure, stability, evol...

  3. NATO Conference on Manpower Planning and Organization Design

    CERN Document Server

    Niehaus, Richard

    1978-01-01

    This volume is the proceedings of the conference entitled "Manpower Planning and Organization Design" which was held in Stresa, Italy, 20-24 June 1977. The Conference was sponsored by the NATO Scientific Affairs Division and organized jointly through the Special Programs Panels on Human Factors and on Systems Science. Two Conference Directors were appointed with overall responsibilities for the programme and for policy, and they were assisted in their tasks by a small advisory panel consisting of Professor A. Charnes (University of Texas), Professor W.W. Cooper (Carnegie Mellon University, now at Harvard University) and Dr. F.A. Heller (Tavistock Institute of Human Relations). Professor R. Florio of Bergamo kindly agreed to become Administrative Director and, as such, was responsible for all the local arrangements. The Conference Directors were further assisted by "national points of contact" appointed from each of the member countries of NATO. These national representatives played a substantial part in the s...

  4. International Asia Conference on Industrial Engineering and Management Innovation

    CERN Document Server

    Shen, Jiang; Dou, Runliang

    2013-01-01

    The International Conference on Industrial Engineering and Engineering Management is sponsored by the Chinese Industrial Engineering Institution, CMES, which is the only national-level academic society for Industrial Engineering. The conference is held annually as the major event in this arena. Being the largest and the most authoritative international academic conference held in China, it provides an academic platform for experts and entrepreneurs in the areas of international industrial engineering and management to exchange their research findings. Many experts in various fields from China and around the world gather together at the conference to review, exchange, summarize and promote their achievements in the fields of industrial engineering and engineering management. For example, some experts pay special attention to the current state of the application of related techniques in China as well as their future prospects, such as green product design, quality control and management, supply chain and logist...

  5. A Framework for HI Spectral Source Finding Using Distributed-Memory Supercomputing

    CERN Document Server

    Westerlund, Stefan

    2014-01-01

    The latest generation of radio astronomy interferometers will conduct all sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers substantial modifications must be made to serial source finding code. To ease the transition, this work presents the Scalable Source Finder Framework, a framework providing storage access, networking communication and data composition functionality, which can support a wide range of source finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder was imp...
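The framework's key requirement, that a source finding algorithm "can be applied to subsets of the entire image", suggests a simple decomposition: split the image into overlapping sub-images so a source falling on a tile boundary is fully contained in at least one subset. A hypothetical 2-D sketch of such a tiling (the names and parameters are illustrative, not the framework's actual API):

```python
def overlapping_tiles(width, height, tile, overlap):
    """Yield (x0, y0, x1, y1) sub-images that together cover the image.

    Adjacent tiles share `overlap` pixels, so a source up to `overlap`
    pixels across near a seam appears whole in at least one tile.
    """
    step = tile - overlap
    for y0 in range(0, height, step):
        for x0 in range(0, width, step):
            yield (x0, y0, min(x0 + tile, width), min(y0 + tile, height))
```

Each tile can then be searched independently on a separate rank, with duplicate detections in the overlap regions merged afterwards.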

  6. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    Science.gov (United States)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and scalable, multipurpose system.

  7. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than ten thousand CPU cores; however, making the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through the MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules for an optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
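The central idea of the record above, choosing the MPI virtual topology from a communication model, can be illustrated with a toy sketch (all names are hypothetical and the paper's actual model is more detailed): for a halo-exchange stencil code, the per-rank communication volume depends on the shape of the process grid, so one can enumerate the factorizations of the rank count and pick the one that minimizes the exchanged faces.

```python
def process_grids(p):
    """All (px, py, pz) factorizations of p MPI ranks into a 3-D grid."""
    grids = []
    for px in range(1, p + 1):
        if p % px:
            continue
        for py in range(1, p // px + 1):
            if (p // px) % py:
                continue
            grids.append((px, py, p // (px * py)))
    return grids

def halo_cells(grid_shape, procs):
    """Cells exchanged per rank per step for a 1-cell-deep halo."""
    (nx, ny, nz), (px, py, pz) = grid_shape, procs
    lx, ly, lz = nx // px, ny // py, nz // pz  # local subdomain size
    vol = 0
    if px > 1: vol += 2 * ly * lz  # two x-faces
    if py > 1: vol += 2 * lx * lz  # two y-faces
    if pz > 1: vol += 2 * lx * ly  # two z-faces
    return vol

def best_topology(grid_shape, p):
    """Process grid minimizing per-rank halo traffic."""
    return min(process_grids(p), key=lambda procs: halo_cells(grid_shape, procs))
```

For a 256x256x256 grid on 64 ranks this model favors the balanced (4, 4, 4) decomposition over slab-like ones such as (64, 1, 1), which is the intuition behind topology-aware rank placement.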

  8. Large-scale integrated super-computing platform for next generation virtual drug discovery.

    Science.gov (United States)

    Mitchell, Wayne; Matsumoto, Shunji

    2011-08-01

    Traditional drug discovery starts by experimentally screening chemical libraries to find hit compounds that bind to protein targets, modulating their activity. Subsequent rounds of iterative chemical derivatization and rescreening are conducted to enhance the potency, selectivity, and pharmacological properties of hit compounds. Although computational docking of ligands to targets has been used to augment the empirical discovery process, its historical effectiveness has been limited because of the poor correlation between ligand dock scores and experimentally determined binding constants. Recent progress in super-computing, coupled to theoretical insights, allows the calculation of the Gibbs free energy, and therefore accurate binding constants, for unusually large ligand-receptor systems. This advance extends the potential of virtual drug discovery. A specific embodiment of the technology, integrating de novo, abstract fragment-based drug design, sophisticated molecular simulation, and the ability to calculate thermodynamic binding constants with unprecedented accuracy, is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  10. Operational numerical weather prediction on a GPU-accelerated cluster supercomputer

    Science.gov (United States)

    Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas

    2016-04-01

    The local area weather prediction model COSMO is used at MeteoSwiss to provide high resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology, the model was optimized and adapted to run on Graphics Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble) and KENDA (data assimilation). These new applications correspond to an increase of a factor of 40 in computational load compared with the previous operational setup. We present an overview of the approach used to port the COSMO model to GPUs, together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.

  11. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    Science.gov (United States)

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.
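Running "on both the CPU and MIC at the same time", as the abstract describes, hinges on balancing work between host and coprocessor. A minimal sketch of one common approach, static partitioning by measured throughput (the function and the rates are hypothetical, not GROMACS internals):

```python
def split_work(n_items, cpu_rate, mic_rate):
    """Split n_items so CPU and coprocessor finish at roughly the same time.

    Rates are items/second, e.g. measured in a short calibration run.
    Returns (cpu_share, mic_share).
    """
    n_mic = round(n_items * mic_rate / (cpu_rate + mic_rate))
    return n_items - n_mic, n_mic
```

With a CPU doing 2000 items/s and a MIC doing 3000 items/s, 1000 items split as (400, 600), so both devices take 0.2 s and neither sits idle.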

  12. Mixed precision numerical weather prediction on hybrid GPU-CPU supercomputers

    Science.gov (United States)

    Lapillonne, Xavier; Osuna, Carlos; Spoerri, Pascal; Osterried, Katherine; Charpilloz, Christophe; Fuhrer, Oliver

    2017-04-01

    A new version of the climate and weather model COSMO has been developed that runs faster both on traditional high performance computing systems with CPUs and on heterogeneous architectures using graphics processing units (GPUs). The model was in addition adapted to be able to run in "single precision" mode. After discussing the key changes introduced in this new model version and the tools used in the porting approach, we present three applications, namely the MeteoSwiss operational weather prediction system, COSMO-LEPS and the CALMO project, which already take advantage of the performance improvement, up to a factor of 4, by running on GPU systems and using the single precision mode. We discuss how the code changes open new perspectives for scientific research and can enable researchers to get access to a new class of supercomputers.
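As a concrete reminder of why a "single precision" mode needs validation: IEEE-754 binary32 has a 24-bit significand, so integer increments are lost beyond 2^24, and long accumulations drift similarly. A stdlib-only illustration (rounding a Python double through single precision via `struct`; not taken from the COSMO code):

```python
import struct

def f32(x):
    """Round a Python float (IEEE-754 double) to single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

big = f32(2.0 ** 24)    # 16777216.0, edge of exact integer range in float32
lost = f32(big + 1.0)   # the +1 vanishes when rounded back to float32
kept = 2.0 ** 24 + 1.0  # double precision keeps it: 16777217.0
```

This is why mixed-precision model ports typically keep sensitive accumulations (e.g. global sums) in double precision while running the bulk of the stencil work in single.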

  13. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  14. Modern Gyrokinetic Particle-In-Cell Simulation of Fusion Plasmas on Top Supercomputers

    CERN Document Server

    Wang, Bei; Tang, William; Ibrahim, Khaled; Madduri, Kamesh; Williams, Samuel; Oliker, Leonid

    2015-01-01

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon...

  15. Dawning Nebulae: A PetaFLOPS Supercomputer with a Heterogeneous Structure

    Institute of Scientific and Technical Information of China (English)

    Ning-Hui Sun; Jing Xing; Zhi-Gang Huo; Guang-Ming Tan; Jin Xiong; Bo Li; Can Ma

    2011-01-01

    Dawning Nebulae is a heterogeneous system composed of 9280 multi-core x86 CPUs and 4640 NVIDIA Fermi GPUs. With a Linpack performance of 1.271 petaFLOPS, it was ranked second in the TOP500 List released in June 2010. In this paper, key issues in the system design of Dawning Nebulae are introduced. System tuning methodologies aimed at the petaFLOPS Linpack result are presented, including algorithmic optimization and communication improvement. The design of its file I/O subsystem, including HVFS and the underlying DCFS3, is also described. Performance evaluations show that the Linpack efficiency of each node reaches 69.89%, and 1024-node aggregate read and write bandwidths exceed 100 GB/s and 70 GB/s respectively. The success of Dawning Nebulae has demonstrated the viability of the CPU/GPU heterogeneous structure for future designs of supercomputers.

  16. 2nd International Conference on Advanced Intelligent Systems and Informatics

    CERN Document Server

    Shaalan, Khaled; Gaber, Tarek; Azar, Ahmad; Tolba, M

    2017-01-01

    This book gathers the proceedings of the 2nd International Conference on Advanced Intelligent Systems and Informatics (AISI2016), which took place in Cairo, Egypt during October 24–26, 2016. This international interdisciplinary conference, which highlighted essential research and developments in the field of informatics and intelligent systems, was organized by the Scientific Research Group in Egypt (SRGE) and sponsored by the IEEE Computational Intelligence Society (Egypt chapter) and the IEEE Robotics and Automation Society (Egypt Chapter). The book’s content is divided into four main sections: Intelligent Language Processing, Intelligent Systems, Intelligent Robotics Systems, and Informatics.

  17. 10th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Lin, Jerry; Wang, Chia-Hung; Jiang, Xin

    2017-01-01

    This book gathers papers presented at the 10th International Conference on Genetic and Evolutionary Computing (ICGEC 2016). The conference was co-sponsored by Springer, Fujian University of Technology in China, the University of Computer Studies in Yangon, University of Miyazaki in Japan, National Kaohsiung University of Applied Sciences in Taiwan, Taiwan Association for Web Intelligence Consortium, and VSB-Technical University of Ostrava, Czech Republic. The ICGEC 2016, which was held from November 7 to 9, 2016 in Fuzhou City, China, was intended as an international forum for researchers and professionals in all areas of genetic and evolutionary computing.

  18. 7 CFR 226.12 - Administrative payments to sponsoring organizations for day care homes.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Administrative payments to sponsoring organizations... CARE FOOD PROGRAM Payment Provisions § 226.12 Administrative payments to sponsoring organizations for day care homes. (a) General. Sponsoring organizations for day care homes shall receive payments for...

  19. 44 CFR 208.34 - Agreements between Sponsoring Agencies and others.

    Science.gov (United States)

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Agreements between Sponsoring... SYSTEM Response Cooperative Agreements § 208.34 Agreements between Sponsoring Agencies and others. Sponsoring Agencies are responsible for executing such agreements with Participating Agencies and Affiliated...

  20. 7 CFR 226.13 - Food service payments to sponsoring organizations for day care homes.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Food service payments to sponsoring organizations for... CARE FOOD PROGRAM Payment Provisions § 226.13 Food service payments to sponsoring organizations for day care homes. (a) Payments shall be made only to sponsoring organizations operating under an agreement...

  1. 42 CFR 423.553 - Effect of leasing of a PDP sponsor's facilities.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Effect of leasing of a PDP sponsor's facilities... Change of Ownership or Leasing of Facilities During Term of Contract § 423.553 Effect of leasing of a PDP sponsor's facilities. (a) General effect of leasing. If a PDP sponsor leases all or part of its...

  2. Employer-sponsored health insurance and the gender wage gap.

    Science.gov (United States)

    Cowan, Benjamin; Schwab, Benjamin

    2016-01-01

    During prime working years, women have higher expected healthcare expenses than men. However, employees' insurance rates are not gender-rated in the employer-sponsored health insurance (ESI) market. Thus, women may experience lower wages in equilibrium from employers who offer health insurance to their employees. We show that female employees suffer a larger wage gap relative to men when they hold ESI: our results suggest this accounts for roughly 10% of the overall gender wage gap. For a full-time worker, this pay gap due to ESI is on the order of the expected difference in healthcare expenses between women and men.

  3. Provider-sponsored HMOs: make, buy, or joint venture?

    Science.gov (United States)

    Clay, S B

    1997-03-01

    Providers can sponsor their own HMOs in one of three ways: by creating their own HMO, by joint venturing with an existing HMO, or by purchasing an existing HMO. When selecting the best option, providers must consider various market conditions. Managed care penetration in the area, potential competitive responses of existing HMOs, market demand, provider reputation, and provider marketing ability will all influence the feasibility of each option. Providers also must examine their own organizational identity, their ability to raise the necessary capital to start an HMO, their managed care expertise and risk contracting experience, and their information systems capabilities.

  4. The Race for Sponsored Links: Bidding Patterns for Search Advertising

    OpenAIRE

    Zsolt Katona; Miklos Sarvary

    2010-01-01

    Paid placements on search engines reached sales of nearly $11 billion in the United States last year and represent the most rapidly growing form of online advertising today. In its classic form, a search engine sets up an auction for each search word in which competing websites bid for their sponsored links to be displayed next to the search results. We model this advertising market, focusing on two of its key characteristics: (1) the interaction between the list of search results and the lis...

  5. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    Science.gov (United States)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm, with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementation of the multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms, chosen to offer an optimal trade-off between cost and performance, demonstrates that such simulations are feasible with currently available HPC resources.
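The multiple time-stepping (MTS) technique credited with the speedups above can be sketched in a few lines. This is a generic RESPA-style scheme, not the authors' implementation: cheap short-range forces are evaluated every step, while an expensive long-range force is recomputed only every k-th step and reused in between.

```python
def mts_integrate(steps, k, fast_force, slow_force, dt=1e-3):
    """Generic multiple time-stepping loop for a 1-D particle."""
    x, v = 1.0, 0.0
    slow = 0.0
    for i in range(steps):
        if i % k == 0:
            slow = slow_force(x)          # expensive force, reused k steps
        v += (fast_force(x) + slow) * dt  # cheap force every step
        x += v * dt
    return x, v

# Count force evaluations to show the saving.
calls = {"fast": 0, "slow": 0}

def fast(x):
    calls["fast"] += 1
    return -4.0 * x    # stands in for a cheap short-range term

def slow(x):
    calls["slow"] += 1
    return -0.1 * x    # stands in for an expensive long-range term

mts_integrate(100, 5, fast, slow)
```

With k = 5 the expensive force is evaluated 20 times over 100 steps instead of 100, which is the source of the MTS-over-STS rate improvement when the slow force dominates the cost.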

  6. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers, and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated: the glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver explains previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases...
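The iterative scheme the abstract outlines, a stencil update driven by the residual with only nearest-neighbour communication, can be sketched on a 1-D model problem. This is a generic pseudo-transient relaxation solver for u'' = f with u(0) = u(1) = 0, not the authors' Stokes or poromechanical code:

```python
def solve_poisson_1d(n, rhs, tol=1e-8, max_iter=100000):
    """Pseudo-transient finite-difference solve of u'' = rhs on (0, 1).

    n interior points, homogeneous Dirichlet boundaries; iterates
    u += dt * residual until the max residual drops below tol.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    dt = 0.4 * h * h  # stability-limited pseudo-time step
    for _ in range(max_iter):
        res_max = 0.0
        new = u[:]
        for i in range(1, n + 1):  # in MPI, this loop is split over ranks
            r = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h) - rhs(i * h)
            new[i] = u[i] + dt * r
            res_max = max(res_max, abs(r))
        u = new
        if res_max < tol:
            break
    return u
```

Because each update touches only neighbouring points, a distributed version needs only point-to-point halo exchanges between adjacent subdomains, which is why such schemes scale linearly by construction.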

  7. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with a low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  8. CPAFFC Working Group Attends Sino-African Initiative Conference

    Institute of Scientific and Technical Information of China (English)

    Duan; Jun

    2013-01-01

    A CPAFFC working group led by its Vice President Xie Yuan attended the 2013 Sino-African Initiative (SAI) Conference sponsored by Sister Cities International (SCI) of the United States and organized by the Eastern Africa Sister Cities (EASC) in Nairobi, Kenya, from January 31 to February 2. About 60 officials and representatives of sister cities associations from China, the United States, Kenya, Nigeria and

  9. First International Electronic Conference on Medicinal Chemistry (ECMC-1)

    Science.gov (United States)

    Mayence, Annie; Vanden Eynde, Jean Jacques

    2016-01-01

    The first International Electronic Conference on Medicinal Chemistry, organized and sponsored by MDPI AG, publisher, and the journal Pharmaceuticals, took place in November 2015 on the SciForum website. More than 200 authors from 18 countries participated in the event, which was attended by 25,000 visitors who had the opportunity to browse among 55 presentations, keynotes, and videos. A short description of some of the works presented during that scientific meeting is disclosed in this report.

  10. Conference Report: ESF-COST High-Level Research Conference Natural Products Chemistry, Biology and Medicine III.

    Science.gov (United States)

    Catino, Arthur

    2010-12-01

    Natural Products Chemistry, Biology and Medicine III was the third conference in a series of events sponsored by the European Science Foundation (ESF) and the European Cooperation in the field of Scientific and Technical Research (COST). Scientists came together from within and outside the EU to present cutting-edge developments in chemical synthesis. Research areas included the synthesis of natural products, methods development, isolation/structural elucidation and chemical biology. As our capacity to produce new chemotherapeutic agents relies on chemical synthesis, this year's conference has never been so timely. This report highlights several of the scientific contributions presented during the meeting.

  11. Introduction to the Text Retrieval Conference

    Institute of Scientific and Technical Information of China (English)

    吴立德; 黄萱菁

    2002-01-01

    The Text Retrieval Conference (TREC), which is sponsored by the National Institute of Standards and Technology as well as the Defense Advanced Research Projects Agency, is the most authoritative international evaluation conference on text retrieval. This paper describes the Ninth Text Retrieval Conference held in 2000 and its four main tracks, which are Question Answering, Web Retrieval, Cross-Language Information Retrieval and Text Filtering, from the aspects of test topics, corpus, evaluation metrics and results.

  12. Clinical research and industrial sponsoring: avenues towards transparency and credibility.

    Science.gov (United States)

    Hildebrandt, M; Ludwig, W-D

    2003-12-01

    Clinical research is intended to serve the patient, in the pursuit of a deepened understanding of physiological interactions and their changes in disease, and of potentially beneficial implications for the patient. The impetus to perform clinical research is shaped by various intentions, such as the desire to provide cure or relief, striving for personal and professional success, public attention, financial considerations, or simply scientific curiosity. A similarly wide range of diverging interests must be assumed to impinge on diagnostic and therapeutic decisions in clinical work. How are we to perform clinical research and therapy with the patients' benefit in mind, in view of such a complex motivational status, and how are we to perceive the peculiar interests of those influencing clinical work, including ourselves? In this review, we attempt to elucidate the complex pathways of interaction between physicians and industrial sponsors. Special attention will be paid to the following topics: the pharmaceutical market, public interests, legal and ethical issues, conflicts of interest, and the potential impact of industry-sponsored drug trials on medical information and subsequent therapeutic decisions. We will conclude with recommendations for an acceptable position in the tension between cooperation and corruptibility, a position that grants priority to the patient's needs rather than third party interests. Copyright 2003 S. Karger GmbH, Freiburg

  13. An editor's considerations in publishing industry-sponsored studies.

    Science.gov (United States)

    Droller, Michael J

    2015-03-01

    The fundamental responsibility of a journal editor is to assure that studies accepted for publication provide rigorous original scientific information and reviews that are considered important to the readership. The fundamental requirements of such reports from an editor's perspective include objectivity and transparency in each of the study design, implementation of investigation methods, acquisition of data, inclusive analysis and interpretation of results, appropriate application of statistical methods, presentation of outcomes in the context of a balanced and comprehensive review of relevant literature, and meaningful conclusions. In proceeding on these presumptions, editors then have the responsibility of obtaining rigorous, objective, and constructive reviews of these reports so that they can make an unbiased decision regarding their disposition. The fundamental objective in this is to enhance the ultimate scientific validity and value of the work if and when it is accepted for publication. Guidelines have been advanced by several organizations to identify how such editorial responsibilities can be fulfilled. These guidelines also pertain to investigators, authors, and sponsors of the studies, which the various reports and reviews describe. The present article reviews these guidelines as they relate to both industry-sponsored and investigator-initiated investigations and as relevant to the variety of reports that a scientific/medical journal such as Urologic Oncology: Seminars and Original Investigations receives for publication.

  14. Report on a Boston University Conference December 7-8, 2012 on "How Can the History and Philosophy of Science Contribute to Contemporary US Science Teaching?"

    Science.gov (United States)

    Garik, Peter; Benétreau-Dupin, Yann

    2014-01-01

    This is an editorial report on the outcomes of an international conference sponsored by a grant from the National Science Foundation (NSF) (REESE-1205273) to the School of Education at Boston University and the Center for Philosophy and History of Science at Boston University for a conference titled: "How Can the History and Philosophy of…

  15. BioSTEC 2017: 10th International Joint Conference on Biomedical Engineering Systems and Technologies : Proceedings Volume 5: HealthInf

    NARCIS (Netherlands)

    2017-01-01

    This book contains the proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2017). This conference is sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), in cooperation with the ACM

  17. How To Free Our People: Real Life Solutions--A National Conference (Kansas City, Missouri, May 21-23, 2003). Participant's Manual.

    Science.gov (United States)

    Darling, Bruce; Lowry, Kirk; Langbehn, Kristy; Stamper, Dustin; Petty, Richard; Heinsohn, Dawn; Michaels, Bob; Hughey, Anne-Marie

    This document is the participant's manual for a 3-day training conference for professionals involved in transition and the independent living movement for individuals with disabilities. Preliminary information includes the conference agenda, background information on the trainers and the sponsoring organizations, and the learning objectives of the…

  18. Conference on Resource Sharing in Southern and Central Africa (Dar-es-Salaam, Tanzania, December 16-19, 1985). Final Report.

    Science.gov (United States)

    United Nations Educational, Scientific and Cultural Organization, Paris (France). General Information Programme.

    This document summarizes the activities of a conference held at the Institute of Finance Management in Tanzania on information resource sharing in Southern and Central Africa. Delegates and observers from Lesotho, Swaziland, Mozambique, Botswana, Zimbabwe, Malawi, Zambia, and Tanzania attended the conference. The 15 participants, 8 sponsored by…

  19. IEEE Conference on Software Engineering Education and Training (CSEE&T 2012) Proceedings (25th, Nanjing, Jiangsu, China, April 17-19, 2012)

    Science.gov (United States)

    IEEE Conference on Software Engineering Education and Training, Proceedings (MS), 2012

    2012-01-01

    The Conference on Software Engineering Education and Training (CSEE&T) is the premier international peer-reviewed conference, sponsored by the Institute of Electrical and Electronics Engineers, Inc. (IEEE) Computer Society, which addresses all major areas related to software engineering education, training, and professionalism. This year, as in…

  2. "Food company sponsors are kind, generous and cool": (mis)conceptions of junior sports players.

    Science.gov (United States)

    Kelly, Bridget; Baur, Louise A; Bauman, Adrian E; King, Lesley; Chapman, Kathy; Smith, Ben J

    2011-09-05

    Children's exposure to unhealthy food marketing influences their food knowledge, preferences and consumption. Sport sponsorship by food companies is widespread and industry investment in this marketing is increasing. This study aimed to assess children's awareness of sport sponsors and their brand-related attitudes and purchasing intentions in response to this marketing. Sports clubs known to have food sponsors and representing the most popular sports for Australian children across a range of demographic areas were recruited. Interview-based questionnaires were conducted at clubs with children aged 10-14 years (n = 103) to examine their recall of local sports club and elite sport sponsors, and their attitudes towards sponsors and sponsorship activities. Most children (68%) could recall sponsors of their sports club, naming a median of two sponsors, including a median of one food company sponsor each. Almost half (47%) of children could recall any sponsors of their favourite elite sporting team. Children aged 10-11 years were more likely than older children to report that they thought about sponsors when buying something to eat or drink (P sport performance (86% and 76%, respectively). Around one-third of children reported liking the company more after receiving these rewards. Children's high recall of food and beverage company sport sponsors and their positive attitudes towards these sponsors and their promotions is concerning as this is likely to be linked to children's food preferences and consumption. Limiting children's exposure to this marketing is an important initiative to improve children's nutrition.

  3. "Food company sponsors are kind, generous and cool": (mis)conceptions of junior sports players

    Directory of Open Access Journals (Sweden)

    King Lesley

    2011-09-01

    Full Text Available Abstract Background Children's exposure to unhealthy food marketing influences their food knowledge, preferences and consumption. Sport sponsorship by food companies is widespread and industry investment in this marketing is increasing. This study aimed to assess children's awareness of sport sponsors and their brand-related attitudes and purchasing intentions in response to this marketing. Methods Sports clubs known to have food sponsors and representing the most popular sports for Australian children across a range of demographic areas were recruited. Interview-based questionnaires were conducted at clubs with children aged 10-14 years (n = 103) to examine their recall of local sports club and elite sport sponsors, and their attitudes towards sponsors and sponsorship activities. Results Most children (68%) could recall sponsors of their sports club, naming a median of two sponsors, including a median of one food company sponsor each. Almost half (47%) of children could recall any sponsors of their favourite elite sporting team. Children aged 10-11 years were more likely than older children to report that they thought about sponsors when buying something to eat or drink (P Conclusions Children's high recall of food and beverage company sport sponsors and their positive attitudes towards these sponsors and their promotions is concerning as this is likely to be linked to children's food preferences and consumption. Limiting children's exposure to this marketing is an important initiative to improve children's nutrition.

  4. An Exploratory Investigation of Important Qualities and Characteristics of Alcoholics Anonymous Sponsors.

    Science.gov (United States)

    Stevens, Edward B; Jason, Leonard A

    Alcoholics Anonymous recommends that members have sponsors, especially early in their recovery, yet little research has been done on the qualities of an effective sponsor. A total of 245 adults (117 females, 128 males) currently in substance use disorder recovery participated; 231 of these individuals had experience as a sponsor, a sponsee, or both (109 had experience as a sponsor). Qualitative results suggest effective sponsors are currently engaged in the program on a personal level, are trustworthy, and are available, although a wide variety of attributes were cited. In a choice-and-ranking exercise, 12-step engagement and qualities of character were also most often ranked highly. No significant differences were found between genders or sponsor/sponsee roles. Implications based on the breadth of responses and dominant themes are discussed, as well as the need for further research on sponsor/sponsee characteristics, satisfaction, and recovery outcomes.

  5. Conference this! Lead Pipers compare conference experiences

    Directory of Open Access Journals (Sweden)

    Editorial Board

    2010-04-01

    Full Text Available As library travel budgets are increasingly slashed around the country, it's a tough time for conference-going. In this group post, we compare notes about the conferences we've attended, which have been our favorites, and why. We hope this will generate creative ideas on good conferences (online or in-person) to look forward to, and maybe offer [...

  6. Surgeon-industry conflict of interest: survey of opinions regarding industry-sponsored educational events and surgeon teaching: clinical article.

    Science.gov (United States)

    DiPaola, Christian P; Dea, Nicolas; Dvorak, Marcel F; Lee, Robert S; Hartig, Dennis; Fisher, Charles G

    2014-03-01

    Conflict of interest (COI) as it applies to medical education and training has become a source of considerable interest, debate, and regulation in the last decade. Companies often pay surgeons as faculty for educational events and often sponsor and give financial support to major professional society meetings. Professional medical societies, industry, and legislators have attempted to regulate potential COI without consideration for public opinion. The practice of evidence-based medicine requires the inclusion of patient opinion along with best available evidence and expert opinion. The primary goal of this study was to assess the opinion of the general population regarding surgeon-industry COI for education-related events. A Web-based survey was administered, with special emphasis on the surgeon's role in industry-sponsored education and support of professional societies. A survey was constructed to sample opinions on reimbursement, disclosure, and funding sources for educational events. There were 501 completed surveys available for analysis. More than 90% of respondents believed that industry funding for surgeons' tuition and travel for either industry-sponsored or professional society educational meetings would either not affect the quality of care delivered or would cause it to improve. Similar results were generated for opinions on surgeons being paid by industry to teach other surgeons. Moreover, the majority of respondents believed it was ethical or had no opinion if surgeons had such a relationship with industry. Respondents were also generally in favor of educational conferences for surgeons regardless of funding source. Disclosures of a surgeon-industry relationship, especially if it involves specific devices that may be used in their surgery, appears to be important to respondents. The vast majority of respondents in this study do not believe that the quality of their care will be diminished due to industry funding of educational events, for surgeon

  7. Peace Education: Glimpses from the EUPRA Conference in Firenze. Peace Education Reports No. 5.

    Science.gov (United States)

    Bjerstedt, Ake, Ed.

    This report presents the material from a workshop on peace education that was part of a conference sponsored by the European Peace Research Association (EUPRA). Two papers, "Research as a Tool for Peace Education" (Alberto L'Abate) and "Promoting Commitment to Peace and Environmental Responsibility" (Riitta Wahlstrom), are documented in part 1 of…

  8. The Uniqueness of Collective Bargaining in Higher Education. Proceedings, Sixth Annual Conference, April 1978.

    Science.gov (United States)

    Levenstein, Aaron, Ed.; Lang, Theodore H.

    The proceedings of a conference on collective bargaining in higher education sponsored by the National Center for the Study of Collective Bargaining in Higher Education are presented. The contents are as follows: an introduction; welcoming address by Joel Segall; keynote address by Harold Newman; "The Impact of Collective Bargaining Upon Those Who…

  9. Research and development conference: California Institute for Energy Efficiency (CIEE) program

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    CIEE's first Research and Development Conference will introduce you to some of the results achieved to date through CIEE-sponsored multiyear research performed in three programs: building energy efficiency, air quality impacts of energy efficiency, and end-use resource planning. Results from scoping studies, Director's discretionary research, and exploratory research will also be featured.

  10. Embodied Religion. Proceedings of the 2012 Conference of the European Society for Philosophy of Religion

    NARCIS (Netherlands)

    Jonkers, Peter; Sarot, Marcel

    2013-01-01

    This collection of papers is derived from the nineteenth biannual conference of the European Society for Philosophy of Religion, held in the ‘Kontakt der Kontinenten’ in Soesterberg, the Netherlands, from 30 August to 2 September 2012, which was sponsored by the School of Catholic Theology

  11. ICSOFT 2006 : First International Conference on Software and Data Technologies, Volume 1

    NARCIS (Netherlands)

    Filipe, Joaquim; Shishkov, Boris; Helfert, Markus

    2006-01-01

    This volume contains the proceedings of the first International Conference on Software and Data Technologies (ICSOFT 2006), organized by the Institute for Systems and Technologies of Information, Communication and Control (INSTICC) in cooperation with the Object Management Group (OMG), sponsored by

  12. Adelante, Mujer Hispana: A Conference Model for Hispanic Women. Pamphlet 20.

    Science.gov (United States)

    Women's Bureau (DOL), Washington, DC.

    The model is based on the highly successful first Women's Bureau sponsored Colorado Education and Employment Conference for Hispanic Women ("Adelante, Mujer Hispana") held in January 1980 for low-income women seeking employment and employed women seeking better jobs and upward mobility. It is intended for use by groups and individuals in planning…

  13. Extending the Dream: A Report of the 1975 Artists-in-Schools National Conference.

    Science.gov (United States)

    Gross, Ronald

    The document reports on a conference which reviewed progress of the Artists-in-Schools (AIS) program. Sponsored by the National Endowment for the Arts, the program places professional artists in elementary and secondary schools for residencies of several days to a full year. Artists, educators, and AIS state coordinators who participated in the…

  14. Cambridge Healthtech Institute fourth annual conference in structure-based drug design.

    Science.gov (United States)

    McInnes, Campbell

    2004-06-01

    The CHI-sponsored fourth annual meeting, held at the Sheraton hotel in Boston, USA, was attended by approximately 120 delegates, mainly from the pharmaceutical and biotechnology industries. The theme of the conference focused on new developments and validation of current techniques in structure-based drug design, as well as the successful application of these methods in drug development.

  15. Proceedings of the 2014 7th IFIP Wireless and Mobile Networking Conference (WMNC)

    NARCIS (Netherlands)

    Monteiro, E.; Curado, M.; Heijenk, Gerhard J.; Braun, T.; Granjal, J.; Unknown, [Unknown

    2014-01-01

    Message from the chairs: We had the great pleasure to welcome all participants to the 7th IFIP Wireless and Mobile Networking Conference (WMNC 2014), organized by the University of Coimbra and held in Vilamoura, Portugal, May 20-22, 2014. WMNC 2014 was sponsored by IFIP TC6, and technically co-…

  16. 1987 Oak Ridge model conference: Proceedings: Volume I, Part 3, Waste Management

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    A conference sponsored by the United States Department of Energy (DOE), was held on waste management. Topics of discussion were transuranic waste management, chemical and physical treatment technologies, waste minimization, land disposal technology and characterization and analysis. Individual projects are processed separately for the data bases. (CBS)

  17. 1987 Oak Ridge model conference: Proceedings: Volume I, Part 2, Waste Management

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    A conference sponsored by the United States Department of Energy (DOE) was held on waste management. Topics discussed were waste stabilization technologies, regulations and standards, innovative treatment technology, and waste stabilization projects. Individual projects are processed separately for the data bases. (CBS)

  18. Government-sponsored microfinance program: Joint liability vs. individual liability

    Directory of Open Access Journals (Sweden)

    Arghya Kusum Mukherjee

    2014-12-01

    Full Text Available Swarnajayanti Gram Swarozgar Yojana (SGSY) is a government-sponsored microfinance program. The scheme is based on four features: group lending with joint liability, progressive lending, back-ended subsidy, and social capital. We propose a new model of SGSY having these features: group lending with individual liability, progressive lending, back-ended subsidy, and social capital. The “joint liability” clause of the existing model is replaced with individual liability in the new model. The paper shows that the problem of adverse selection is removed in both models, i.e. in “SGSY with group lending and joint liability” and “SGSY with group lending and individual liability.” The problem of moral hazard is more severe in the existing model of SGSY than in the proposed model. Borrowers also benefit more from participation in the proposed scheme of SGSY than in the existing one.

  19. Teaching with Sponsored Instructional Materials: Attitudes of Teachers in Uganda

    Directory of Open Access Journals (Sweden)

    Ndawula Stephen

    2009-09-01

    Full Text Available The purpose of this study was to examine teachers' attitudes towards using sponsored instructional materials provided by the Aga Khan Education Service (AES) to primary schools in Uganda. The objective of the study was to establish teachers' attitudes towards the AES materials in relation to the level of class taught, size of classes, and the nature of the subject(s) taught. Data were gathered from twenty-five class teachers using interview schedules. The majority of the teachers (93%) expressed positive attitudes, while only 7% had negative attitudes about using the AES materials. Conclusions were drawn and recommendations focused on: sensitization of teachers on the importance of AES; improving the close working relationship between primary schools and AES; extending AES materials to non-project schools; and further research on variables other than teachers' attitudes.

  20. XXV IUPAP Conference on Computational Physics (CCP2013): Preface

    Science.gov (United States)

    2014-05-01

    XXV IUPAP Conference on Computational Physics (CCP2013) was held from 20-24 August 2013 at the Russian Academy of Sciences in Moscow, Russia. The annual Conferences on Computational Physics (CCP) present an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas. The CCP series aims to draw computational scientists from around the world and to stimulate interdisciplinary discussion and collaboration by putting together researchers interested in various fields of computational science. It is organized under the auspices of the International Union of Pure and Applied Physics and has been in existence since 1989. The CCP series alternates between Europe, America and Asia-Pacific. The conferences are traditionally supported by European Physical Society and American Physical Society. This year the Conference host was Landau Institute for Theoretical Physics. The Conference contained 142 presentations, and, in particular, 11 plenary talks with comprehensive reviews from airbursts to many-electron systems. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), European Physical Society (EPS), Division of Computational Physics of American Physical Society (DCOMP/APS), Russian Foundation for Basic Research, Department of Physical Sciences of Russian Academy of Sciences, RSC Group company. Further conference information and images from the conference are available in the pdf.

  1. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, and geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementing a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks, depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps.
The solution architecture includes the following sub-systems: (1) data acquisition responsible for
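
    The massive pairwise cross-correlation that dominates this computation can be sketched in a few lines (an illustrative toy with made-up station counts and trace lengths, not the production code running on "Piz Daint"):

```python
import numpy as np

rng = np.random.default_rng(2)
n_stations, n_samples = 4, 2048
records = rng.normal(size=(n_stations, n_samples))  # synthetic noise traces

def cross_correlate(a, b):
    """Frequency-domain cross-correlation of two equally long traces."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()  # next power of two for the FFT
    spec = np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft))
    return np.fft.irfft(spec, nfft)[:n]

# Correlate every station pair: the O(n_stations^2) loop that makes the
# problem computationally intensive for networks of thousands of stations.
correlations = {
    (i, j): cross_correlate(records[i], records[j])
    for i in range(n_stations)
    for j in range(i + 1, n_stations)
}
```

    On a heterogeneous machine, each station pair (or frequency band) can be dispatched to a different CPU core or GPU accelerator, which is what makes such systems attractive for this workload.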

  2. Biochemical Society sponsors event at Anglia Ruskin University, Cambridge.

    OpenAIRE

    Jones, Richard P. O.

    2015-01-01

    Attendance. This meeting brought together staff, postgraduate students taking ARU’s MSc in Biotechnology (led by Philip Warburton), and finalists taking either ARU’s BSc in Biomedical Science (led by Claire Pike) or University Centre Harlow’s (UCH) BSc in Bioscience (led by Linda King and Matt Webster). Approximately 135 students and 22 staff attended. The conference included undergraduates taking the module Current Advances in Biomedical Science (led by Richard Jones) and MSc students taking...

  3. Ethics committees and externally - sponsored research in Iran

    Directory of Open Access Journals (Sweden)

    Bagher Larijani

    2006-03-01

    Full Text Available Globally, there have been considerable debates on the ethical conduct and review of collaborative international research. One of the great challenges is conducting clinical trials in developing countries as Externally-Sponsored Research (ESR). This descriptive survey reviewed the status of this type of research in Iran during 2002 to 2003. The study was carried out in 44 universities of medical sciences and 32 research centers. Questionnaires containing closed and open-ended questions were sent to their Ethics Review Committees (ECs). After collection and coding, the data were analyzed with SPSS software version 11.5. Forty-one universities and 25 research centers responded, but only 35 ECs returned the questionnaire. According to the collected data, 26 and 54 studies were carried out as collaborative or externally-sponsored research in Iran in 2002 and 2003, respectively. Although more than half of the ECs' members had received the necessary preliminary education, professional educational courses existed in only 25% of ECs. About 17% of ESR projects were not examined by ECs, although they were evaluated in the Scientific Research Council. Only one of the evaluated ESR proposals was rejected; the rest were approved and conducted. Socio-cultural issues and religious beliefs, scientific validity, and national priorities were the most important factors in proposal evaluation. The ethical issues concerning international collaboration for clinical research in developing countries are complex. We should enhance educational programs for researchers and establish appropriate regulatory guidelines at national and international levels.

  4. Global Optimization for Advertisement Selection in Sponsored Search

    Institute of Scientific and Technical Information of China (English)

    崔卿; 白峰杉; 高斌; 刘铁岩

    2015-01-01

    Advertisement (ad) selection plays an important role in sponsored search, since it is an upstream component and heavily influences the effectiveness of the subsequent auction mechanism. However, most existing ad selection methods regard ad selection as a relatively independent module and consider only the literal or semantic matching between queries and keywords during the ad selection process. In this paper, we argue that this approach is not globally optimal. Our proposal is to formulate ad selection as an optimization problem in which the selected ads work together with downstream components (e.g., the auction mechanism) to maximize user clicks, advertiser social welfare, and search engine revenue (we call the combination of these objective functions the marketplace objective for ease of reference). To this end, we 1) extract a set of features to represent each pair of query and keyword, and 2) train a machine learning model that maps the features to a binary variable indicating whether the keyword is selected or not, by maximizing the aforementioned marketplace objective. This formalization seems quite natural; however, it is technically difficult because the marketplace objective is non-convex, discontinuous, and non-differentiable with respect to the model parameters due to the ranking and second-price rules in the auction mechanism. To tackle this challenge, we propose a probabilistic approximation of the marketplace objective, which is smooth and can be effectively optimized by conventional optimization techniques. We test the ad selection model learned with our proposed method using the sponsored search log of a commercial search engine. The experimental results show that our method significantly outperforms several ad selection algorithms on all the metrics under investigation.
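
    The central technical idea here, smoothing a discontinuous selection objective so that gradient methods apply, can be illustrated with a toy sketch (the features, values, linear model, and sigmoid surrogate below are all hypothetical; this is not the authors' actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature vectors for 200 query-keyword pairs and the marketplace
# value realized if a pair is selected (both invented for illustration).
X = rng.normal(size=(200, 5))
value = rng.normal(loc=1.0, size=200)

def smoothed_objective(w, temperature=1.0):
    """Replace the hard indicator 1[x.w > 0] with a sigmoid so the expected
    marketplace value becomes smooth and differentiable in w."""
    p = 1.0 / (1.0 + np.exp(-X @ w / temperature))  # selection probabilities
    return np.mean(p * value)

def gradient(w, temperature=1.0):
    p = 1.0 / (1.0 + np.exp(-X @ w / temperature))
    # d/dw of mean(p * value), using dp/dw = p * (1 - p) * x / temperature.
    return (X * (p * (1 - p) * value / temperature)[:, None]).mean(axis=0)

# Plain gradient ascent on the smooth surrogate; the true discontinuous
# objective could not be optimized this way.
w = np.zeros(5)
for _ in range(500):
    w += 0.5 * gradient(w)
```

    With the surrogate in place, any conventional first-order optimizer applies; the temperature parameter controls how closely the sigmoid approximates the hard selection rule.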

  5. 76 FR 64083 - Reliability Technical Conference; Notice of Technical Conference

    Science.gov (United States)

    2011-10-17

    ... Energy Regulatory Commission Reliability Technical Conference; Notice of Technical Conference Take notice that the Federal Energy Regulatory Commission will hold a Technical Conference on Tuesday, November 29... addressing risks to reliability that were identified in earlier Commission technical conferences....

  6. Proceedings of the 1999 Review Conference on Fuel Cell Technology

    Energy Technology Data Exchange (ETDEWEB)

    None Available

    2000-06-05

    The 1999 Review Conference on Fuel Cell Technology was jointly sponsored by the U.S. Department of Energy, Federal Energy Technology Center (FETC), the Gas Research Institute (GRI), and the Electric Power Research Institute (EPRI). It was held August 3 to 5 in Chicago, Illinois. The goal of this conference was to provide a forum for reviewing fuel cell research and development (R&D) programs, assist in strategic R&D planning, promote awareness of sponsor activities, and enhance interactions between manufacturers, researchers, and stakeholders. This conference was attended by over 250 representatives from industry, academia, national laboratories, gas and electric utilities, DOE, and other Government agencies. The conference agenda included a keynote session, five presentation sessions, a poster presentation reception, and three breakout sessions. The presentation session topics were DOD Fuel Cell Applications, Low-Temperature Fuel Cell Manufacturers, Low-Temperature Component Research, High-Temperature Fuel Cell Manufacturers, and High-Temperature Component Research; the breakout session topics were Future R&D Directions for Low-Temperature Fuel Cells, Future R&D Directions for High-Temperature Fuel Cells, and a plenary summary session. All sessions were well attended.

  7. Fake/Bogus Conferences

    DEFF Research Database (Denmark)

    Asadi, Amin; Rahbar, Nader; Rezvani, Mohammad Javad

    2017-01-01

    The main objective of the present paper is to introduce some features of fake/bogus conferences and some viable approaches to differentiate them from real ones. These fake/bogus conferences introduce themselves as international conferences that are multidisciplinary and indexed in major scientific digital libraries. Furthermore, most of the fake/bogus conference holders offer to publish the accepted papers in ISI journals and use other techniques in their advertisement e-mails.

  8. The company's mainframes join CERN's openlab for DataGrid apps and are pivotal in a new $22 million Supercomputer in the U.K.

    CERN Multimedia

    2002-01-01

    Hewlett-Packard has installed a supercomputer system valued at more than $22 million at the Wellcome Trust Sanger Institute (WTSI) in the U.K. HP has also joined the CERN openlab for DataGrid applications (1 page).

  9. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  10. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  11. SIAM conference on applications of dynamical systems. Abstracts and author index

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-31

    A conference (Oct.15--19, 1992, Snowbird, Utah; sponsored by SIAM (Society for Industrial and Applied Mathematics) Activity Group on Dynamical Systems) was held that highlighted recent developments in applied dynamical systems. The main lectures and minisymposia covered theory about chaotic motion, applications in high energy physics and heart fibrillations, turbulent motion, Henon map and attractor, integrable problems in classical physics, pattern formation in chemical reactions, etc. The conference fostered an exchange between mathematicians working on theoretical issues of modern dynamical systems and applied scientists. This two-part document contains abstracts, conference program, and an author index.

  12. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    Science.gov (United States)

    de Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu, Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-02-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding by synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphical processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time, so that the influence of their spikes is processed simultaneously on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
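
    The global spiking list described above can be illustrated with a small, purely sequential sketch (hypothetical names and dynamics; this is not the ODLM code): at each timestep all neurons at or above threshold are gathered into one list, and the influence of every spike is then applied in a single synchronous pass.

```python
# Hypothetical sketch of the "global spiking list" idea (illustrative
# names and dynamics only; not the paper's ODLM implementation).

def step(potentials, weights, threshold=1.0):
    """Advance the network one timestep; return the global spiking list."""
    # 1. Build the global spiking list: every neuron at/above threshold fires.
    spike_list = [i for i, v in enumerate(potentials) if v >= threshold]
    # 2. Reset the neurons that fired.
    for i in spike_list:
        potentials[i] = 0.0
    # 3. Apply the influence of all spikes to all neurons in one pass;
    #    in the parallel setting each computing unit would run this pass
    #    over its own partition of neurons.
    for i in spike_list:
        for j, w in enumerate(weights[i]):
            potentials[j] += w
    return spike_list
```

    Each worker holding a copy of the same spike list is what lets the spikes' influence be processed simultaneously on all computing units.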

  13. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  14. Massively-parallel electrical-conductivity imaging of hydrocarbons using the Blue Gene/L supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Commer, M.; Newman, G.A.; Carazzone, J.J.; Dickens, T.A.; Green, K.E.; Wahrmund, L.A.; Willen, D.E.; Shiu, J.

    2007-05-16

    Large-scale controlled source electromagnetic (CSEM) three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. To cope with the typically large computational requirements of the 3D CSEM imaging problem, our strategies exploit computational parallelism and optimized finite-difference meshing. We report on an imaging experiment, utilizing 32,768 tasks/processors on the IBM Watson Research Blue Gene/L (BG/L) supercomputer. Over a 24-hour period, we were able to image a large scale marine CSEM field data set that previously required over four months of computing time on distributed clusters utilizing 1024 tasks on an Infiniband fabric. The total initial data misfit could be decreased by 67 percent within 72 completed inversion iterations, indicating an electrically resistive region in the southern survey area below a depth of 1500 m below the seafloor. The major part of the residual misfit stems from transmitter parallel receiver components that have an offset from the transmitter sail line (broadside configuration). Modeling confirms that improved broadside data fits can be achieved by considering anisotropic electrical conductivities. While delivering a satisfactory gross scale image for the depths of interest, the experiment provides important evidence for the necessity of discriminating between horizontal and vertical conductivities for maximally consistent 3D CSEM inversions.

  15. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION]

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlops/Watt at a price of approximately 3.69 MFlops per dollar. The authors demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
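
    For reference, the standard Lennard-Jones pair potential used as the benchmark above can be sketched minimally (a 1-D, O(N^2) illustration with arbitrary parameter values; helper names are not SPaSM's):

```python
# Minimal sketch of the Lennard-Jones pair interaction; epsilon, sigma,
# and the cutoff are illustrative defaults, not the paper's parameters.

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_energy(positions, cutoff=2.5):
    """Reference O(N^2) sum over all pairs within the cutoff (1-D positions)."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = abs(positions[i] - positions[j])
            if r < cutoff:
                e += lj_energy(r)
    return e
```

    The potential minimum sits at r = 2**(1/6)*sigma with depth -epsilon, a common sanity check for MD force kernels.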

  16. Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing

    Science.gov (United States)

    Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David

    2011-10-01

    We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring the locality of the algorithm architecture to fully exploit the massive number of parallel threads on the GPU are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian grid hash table scheme to allow fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
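
    The load-balancing role of a space-filling curve can be illustrated with a generic Z-order (Morton) indexing sketch; this is the textbook construction, not the paper's stencil-based variant:

```python
# Generic Z-order (Morton) curve sketch for load balancing a 2-D mesh:
# cells sorted by interleaved-bit key stay spatially clustered, so
# contiguous chunks of the sorted list make reasonable partitions.

def morton_index(x, y, bits=16):
    """Interleave the bits of cell coordinates (x, y) into one key."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)       # x bits in even positions
        key |= ((y >> b) & 1) << (2 * b + 1)   # y bits in odd positions
    return key

def partition(cells, nworkers):
    """Sort cells along the curve and split into contiguous chunks."""
    ordered = sorted(cells, key=lambda c: morton_index(*c))
    chunk = -(-len(ordered) // nworkers)  # ceiling division
    return [ordered[i:i + chunk] for i in range(0, len(ordered), chunk)]
```

    Because neighboring cells get nearby keys, repartitioning after refinement only shifts chunk boundaries rather than recomputing the whole indexing.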

  17. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    Science.gov (United States)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  18. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of the large-scale sequence alignments. It can be run on distributed memory clusters allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  19. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2{sup 17} processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO{sub 2} sequestration in deep geologic formations.
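
    The domain decomposition parallelism mentioned above can be sketched in miniature (hypothetical helper names; PFLOTRAN itself delegates this to the PETSc framework): a 1-D grid is split into near-equal blocks, and each block needs one ghost cell per interior boundary for the halo exchange.

```python
# Illustrative sketch of block domain decomposition with ghost cells.
# Grid size and rank counts are arbitrary examples, not PFLOTRAN's.

def decompose(ncells, nranks):
    """Return (start, end) ownership ranges, one per rank, as even as possible."""
    base, extra = divmod(ncells, nranks)
    ranges, start = [], 0
    for r in range(nranks):
        size = base + (1 if r < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def ghost_cells(ranges, rank):
    """Indices a rank must receive from its neighbors (halo exchange)."""
    lo, hi = ranges[rank]
    ghosts = []
    if rank > 0:
        ghosts.append(lo - 1)   # last cell owned by the left neighbor
    if rank < len(ranges) - 1:
        ghosts.append(hi)       # first cell owned by the right neighbor
    return ghosts
```

    At 2^17 cores the same idea applies in 3-D: each rank owns a sub-block of the mesh and exchanges only boundary layers with its neighbors each solver iteration.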

  20. A user-friendly web portal for T-Coffee on supercomputers.

    Science.gov (United States)

    Rius, Josep; Cores, Fernando; Solsona, Francesc; van Hemert, Jano I; Koetsier, Jos; Notredame, Cedric

    2011-05-12

    Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of the large-scale sequence alignments. It can be run on distributed memory clusters allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  1. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
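
    The conservative synchronization that the benchmark targets rests on lookahead: a logical process (LP) may only advance to the minimum of its neighbors' guaranteed times (their clock plus lookahead). A small illustrative sketch, with invented topology and values rather than the paper's simulator:

```python
# Sketch of the conservative (lookahead-based) PDES advance rule.
# Topology, clocks, and lookahead values here are invented examples.

def safe_time(neighbor_clocks, lookahead):
    """Earliest time at which a neighbor could still affect this LP."""
    return min(c + lookahead for c in neighbor_clocks)

def advance(lp_clock, local_events, neighbor_clocks, lookahead):
    """Process every queued event timestamped strictly before the safe time."""
    horizon = safe_time(neighbor_clocks, lookahead)
    processed = [t for t in sorted(local_events) if t < horizon]
    new_clock = max([lp_clock] + processed)
    return new_clock, processed
```

    A larger lookahead raises the horizon and lets each LP process more events per synchronization round, which is why telecommunication models with good lookahead parallelize well.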

  2. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    Science.gov (United States)

    Kennedy, J. A.; Kluth, S.; Mazzaferro, L.; Walker, Rodney

    2015-12-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic linux type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. This high luminosity phase will be accompanied by a need for increasing amounts of simulated data which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the Hydra supercomputer facility. Hydra, the supercomputer of the Max Planck Society, is a linux based system with over 80000 cores and 4000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system by using the Nordugrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed, as well as possibilities for future directions.

  3. Conference on Refrigeration for Cryogenic Sensors and Electronic Systems

    CERN Document Server

    Sullivan, D B; McCarthy, S E; Cryogenic Refrigeration Conference; International Cryocooler Conference; Cryocoolers 1

    1981-01-01

    This proceedings documents the output of a meeting of refrigeration specialists held at the National Bureau of Standards, Boulder, CO, on October 6 and 7, 1980. Building on an earlier invitation-only meeting in 1977, the purpose of this first open meeting was to discuss progress in the development of refrigeration systems to cool cryogenic sensors and electronic systems in the temperature range below 20 K and with required cooling capacities below 10 W. The meeting was jointly sponsored by the International Institute of Refrigeration - Commission A1/2, the Office of Naval Research, the Naval Research Laboratory, the Cryogenic Engineering Conference, and the National Bureau of Standards. This first open cryocooler conference consisted of 23 papers presented by representatives of industry, government, and academia. The conference proceedings reproduced here was published by the National Bureau of Standards in Boulder, Colorado as NBS Special Publication #607. Subsequent meetings would become known as the Intern...

  4. 2nd Conference of the International Society for Nonparametric Statistics

    CERN Document Server

    Manteiga, Wenceslao; Romo, Juan

    2016-01-01

    This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cádiz (Spain) between June 11–16 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers...

  5. International Conference on Physics

    CERN Document Server

    2016-01-01

    OMICS International (conference series), the World Class Open Access Publisher and Scientific Event Organizer, is hosting the “International Conference on Physics”, which is going to be the biggest conference dedicated to Physics. Its theme, “Highlighting innovations and challenges in the field of Physics”, frames a three-day conference addressing the major breakthroughs, challenges and the solutions adopted. The conference will be held during June 27-29, 2016 at New Orleans, USA. Proceedings will be published at: http://physics.conferenceseries.com/

  6. 44 CFR 208.9 - Agreements between Sponsoring Agencies and Participating Agencies.

    Science.gov (United States)

    2010-10-01

    .... Every agreement between a Sponsoring Agency and a Participating Agency regarding the System must include... EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY DISASTER ASSISTANCE NATIONAL URBAN SEARCH AND...

  7. International Conference on Climate Change Adaptation Assessments: Conference summary and statement

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The International Conference on Climate Change Adaptation Assessments was held in St. Petersburg, Russian Federation, from May 22--25, 1995. Sponsored by the Russian Federal Service for Hydrometeorology and Environmental Monitoring, the US Country Studies Program, and the Directorate General for International Cooperation of the Netherlands Government, it was the first international conference focusing exclusively on adaptation to climate change. More than 100 people from 29 countries on five continents participated. The conference primarily addressed measures to anticipate the potential effects of climate change to minimize negative effects and take advantage of any positive effects. The focus was on what governments, institutions, and individuals can do to prepare for climate change. The conference dealt with two major topics: what adaptation options are most effective and efficient in anticipating climate change, and what methods should be used to assess the effectiveness and efficiency of adaptation options. Brief summaries are given from the following sessions: agriculture; water resources; coastal resources; ecosystems and forests; fisheries; human settlements; water and agriculture; and the panel session on international adaptation in national communications and other development plans and needs for technical assistance.

  8. Facilitating Learning at Conferences

    DEFF Research Database (Denmark)

    Ravn, Ib; Elsborg, Steen

    2011-01-01

    The typical conference consists of a series of PowerPoint presentations that tend to render participants passive. Students of learning have long abandoned the transfer model that underlies such one-way communication. We propose an alternative theory of conferences that sees them as a forum for learning, mutual inspiration and human flourishing. We offer five design principles that specify how conferences may engage participants more and hence increase their learning. In the research-and-development effort reported here, our team collaborated with conference organizers in Denmark to introduce and facilitate a variety of simple learning techniques at thirty one- and two-day conferences of up to 300 participants each. We present ten of these techniques and data evaluating them. We conclude that if conference organizers allocate a fraction of the total conference time to facilitated processes...

  9. The learning conference

    DEFF Research Database (Denmark)

    Ravn, Ib

    2007-01-01

    Purpose: To call attention to the fact that conferences for professionals rely on massive one-way communication and hence produce little learning for delegates. To introduce an alternative, the 'learning conference,' that involves delegates in fun and productive learning processes. Design/methodology/approach: A typical full-day conference is analyzed. It has six hours of podium talk and twenty-five minutes for delegates to become involved. What model of learning can possibly lie behind this? The transfer model, which assumes learners to be empty vessels. An alternative view is that conference delegates are active professionals in search of inspiration, and they also want to share knowledge with their peers at the conference. A theory of the conference as a forum for mutual inspiration and human co-flourishing is proposed, as are four design principles for a learning conference: 1. Presentations must...

  10. PREFACE: XVII International Scientific Conference ''RESHETNEV READINGS''

    Science.gov (United States)

    2015-01-01

    The International Scientific Conference ''RESHETNEV READINGS'' is dedicated to the memory of Mikhail Reshetnev, an outstanding scientist, chief-constructor of space-rocket systems and communication satellites. The current volume presents selected materials of the XVII International Scientific Conference ''RESHETNEV READINGS'', held on November 12-14, 2013. Plenary sessions, round tables and forums will be attended by famous scientists, developers and designers representing the space technology sector, as well as professionals and experts in the IT industry. A number of outstanding academic figures expressed their interest in an event of such a level, including Jaures Alferov, Vice-president of the Russian Academy of Sciences (RAS), Academician of RAS, Nobel laureate, Dirk Bochar, General Secretary of the European Federation of National Engineering Associates (FEANI), Prof. Yuri Gulyaev, Academician of RAS, Member of the Presidium of RAS, President of the International Union of Scientific and Engineering Associations, Director of the Institute of Radio Engineering and Electronics of RAS, as well as rectors of the largest universities in Russia, chief executives of well-known research enterprises and representatives of big businesses. We would like to thank our main sponsors, JSC ''Reshetnev Information Satellite Systems'', JSC ''Krasnoyarsk Engineering Plant'', Central Design Bureau ''Geophysics'', and the Krasnoyarsk Region Authorities. These enterprises and companies are leaders in the aerospace branch. It is a great pleasure to cooperate with them and train specialists for them.

  11. Fossil Energy Materials Program conference proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Judkins, R.R. (comp.)

    1987-08-01

    The US Department of Energy Office of Fossil Energy has recognized the need for materials research and development to assure the adequacy of materials of construction for advanced fossil energy systems. The principal responsibility for identifying needed materials research and for establishing a program to address these needs resides within the Office of Technical Coordination. That office has established the Advanced Research and Technology Development (AR and TD) Fossil Energy Materials Program to fulfill that responsibility. In addition to the AR and TD Materials Program, which is designed to address in a generic way the materials needs of fossil energy systems, specific materials support activities are also sponsored by the various line organizations such as the Office of Coal Gasification. A conference was held at Oak Ridge, Tennessee on May 19-21, 1987, to present and discuss the results of program activities during the past year. The conference program was organized in accordance with the research thrust areas we have established. These research thrust areas include structural ceramics (particularly fiber-reinforced ceramic composites), corrosion and erosion, and alloy development and mechanical properties. Eighty-six people attended the conference. Papers have been entered individually into EDB and ERA. (LTN)

  12. Conference on Convex Analysis and Global Optimization

    CERN Document Server

    Pardalos, Panos

    2001-01-01

    There has been much recent progress in global optimization algorithms for nonconvex continuous and discrete problems from both a theoretical and a practical perspective. Convex analysis plays a fundamental role in the analysis and development of global optimization algorithms. This is due essentially to the fact that virtually all nonconvex optimization problems can be described using differences of convex functions and differences of convex sets. A conference on Convex Analysis and Global Optimization was held during June 5 -9, 2000 at Pythagorion, Samos, Greece. The conference was honoring the memory of C. Caratheodory (1873-1950) and was endorsed by the Mathematical Programming Society (MPS) and by the Society for Industrial and Applied Mathematics (SIAM) Activity Group in Optimization. The conference was sponsored by the European Union (through the EPEAEK program), the Department of Mathematics of the Aegean University and the Center for Applied Optimization of the University of Florida, by th...

  13. Strategic Bidding Behaviors in Nondecreasing Sponsored Search Auctions

    Directory of Open Access Journals (Sweden)

    Chen-Kun Tsung

    2013-01-01

    To realize specific results in sponsored search auctions, most advertisers submit particular bid prices. Bidding behaviors with specific purposes are called strategic bidding. However, some strategic bidding behaviors result in negative effects, such as the elimination of the equilibrium and payment increases for some advertisers. Bidding behaviors with negative results are termed vindictive bidding. We survey four strategic bidding behaviors, comprising one rational bidding strategy and three vindictive bidding strategies. In this paper, we study the relationship between the effects of vindictive bidding and the valuations of the vindictive advertisers. In our experiments, the search engine provider (SEP) benefits from all vindictive bidding behaviors, and the increment of the SEP's revenue is proportional to the degree of vindictiveness. Bidding vindictively without sacrificing one's own utility improves the advertiser's utility with high probability. Moreover, we observe that the SEP's revenue is improved in the following situations. First, a vindictive advertiser with low valuation in keywords with high market value yields more SEP revenue than one in keywords with low market value. The second is to raise the bidding competition between advertisers.
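
    Sponsored search auctions of the kind studied here are commonly modeled as generalized second-price (GSP) auctions. The following textbook sketch (a generic GSP pricing rule, not the paper's model) shows the mechanism that makes vindictive bidding effective: each slot winner pays the bid of the advertiser ranked just below, so raising a losing bid raises a rival winner's payment.

```python
# Generic generalized second-price (GSP) pricing sketch; slot structure
# and numbers are illustrative only.

def gsp(bids, nslots):
    """Return [(bidder_index, payment)] for each slot under GSP pricing."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])  # rank by bid
    results = []
    for slot in range(min(nslots, len(order))):
        winner = order[slot]
        nxt = order[slot + 1] if slot + 1 < len(order) else None
        payment = bids[nxt] if nxt is not None else 0.0  # next-highest bid
        results.append((winner, payment))
    return results
```

    With bids [5, 3, 2] and two slots, the losing third bidder pays nothing; if it vindictively raises its bid to 2.9, it still loses, but the second-slot winner's payment rises from 2 to 2.9.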

  14. For-profit mediators in sponsored search advertising

    CERN Document Server

    Singh, Sudhir K; Gunadhi, Himawan; Rezaei, Behnam A

    2007-01-01

    A mediator is a well-known construct in game theory, and is an entity that plays on behalf of some of the agents who choose to use its services, while the rest of the agents participate in the game directly. We initiate a game theoretic study of sponsored search auctions, such as those used by Google and Yahoo!, involving {\em incentive driven} mediators. We refer to such mediators as {\em for-profit} mediators, so as to distinguish them from mediators introduced in prior work, who have no monetary incentives, and are driven by the altruistic goal of implementing certain desired outcomes. We show that in our model, (i) players/advertisers can improve their payoffs by choosing to use the services of the mediator, compared to directly participating in the auction; (ii) the mediator can obtain monetary benefit by managing the advertising burden of its group of advertisers; and (iii) the payoffs of the mediator and the advertisers it plays for are incentive compatible with the advertisers who do not use its servi...

  15. Disclosure of competing financial interests and role of sponsors in phase III cancer trials.

    Science.gov (United States)

    Tuech, Jean-Jacques; Moutel, Grégoire; Pessaux, Patrick; Thoma, Véronique; Schraub, Simon; Herve, Christian

    2005-10-01

    Financial relationships between industry, researchers and academic institutions are becoming increasingly complex, raising concern about sponsors' involvement in the conduct of biomedical research. A review of published randomised trials (RCTs) in cancer research was performed to assess adherence to the 1997 disclosure requirements and to document the nature of the disclosed interests. Source(s) of study support, author-sponsor relationships and the role of the study sponsor were assessed for all RCTs published between 1999 and 2003 in 12 international journals. A total of 655 cancer RCTs were identified. Of these, 516 (78.8%) disclosed the source of sponsorship. The nature of the relationship between the authors and the study sponsor was included in 219 of the 227 industry-sponsored studies. The most commonly cited relationships were (131 studies had multiple relations): grants (93.6%); employment (39.2%); consultant/honorarium (12.7%) and stock ownership and participation in a speaker's bureau (12, 5.5% each). Only 41 (18%) of the 227 industry-sponsored RCTs reported the role of the sponsor. Of these, 20 explicitly stated that the sponsor had no role in the study. Twenty-one papers described the sponsor's role, the degree of sponsor involvement was variable and usually described vaguely. Among these papers, four stated that researchers had full access to all data, one that the researchers had no limits on publication and one that 'the decision to submit the paper for publication was determined by the study sponsor'. In conclusion, no researcher should be expected to produce 'findings' without full access to the data, freedom from interference in analysis and interpretation and liberty to publish all results, however disappointing to the stakeholder they may be. In the meantime, researchers do well to arm themselves with the rules for research partnerships and editors to take on the role of watchdog.

  16. PREFACE AND CONFERENCE INFORMATION: Eighth International Conference on Laser Ablation

    Science.gov (United States)

    Hess, Wayne P.; Herman, Peter R.; Bäuerle, Dieter; Koinuma, Hideomi

    2007-04-01

enjoy the collection of papers in these proceedings. Also, please join us for COLA 2007, to be held in the Canary Islands, Spain (http://www.io.csic.es/cola07/index.php). Conference on Laser Ablation (COLA'05) September 11-16, 2005 Banff, Canada Supported by University of Toronto, Canada (UT) Pacific Northwest National Laboratory (PNNL) Sponsors Sponsorship from the following companies is gratefully acknowledged and appreciated AMBP Tech Corporation GSI Lumonics Amplitude Systèmes IMRA America, Inc. Andor Technologies Journal of Physics D: Applied Physics North Canadian Institute for Photonic Innovations LUMERA LASER GmbH Clark-MXR, Inc. Pascal Coherent, Lambda Physik, TuiLaser PVD Products, Inc. Continuum Staib Instruments, Inc. Cyber Laser Inc. Surface GAM LASER, Inc. International Steering Committee C. Afonso (Spain) W. Husinsky (Austria) D. Bäuerle (Austria) W. Kautek (Germany) I.W. Boyd (UK) H. Koinuma (Japan) E.B. Campbell (Sweden) H.U. Krebs (Germany) J.T. Dickinson (USA) D.H. Lowndes (USA) M. Dinescu (Romania) J.G. Lunney (Ireland) J.J. Dubowski (Canada) W. Marine (France) E. Fogarassy (France) K. Murakami (Japan) C. Fotakis (Greece) T. Okada (Japan) D. Geohegan (USA) R.E. Russo (USA) M. Gower (UK) J. Schou (Denmark) R.H. Haglund Jr. (USA) M. Stuke (Germany) R.R. Herman (Canada) K. Sugioka (Japan) W.P. Hess (USA) F. Traeger (Germany) J.S. Horwitz (USA) A. Yabe (Japan) Local Organizing Committee Nikki Avery Pacific Northwest National Laboratory Ken Beck Pacific Northwest National Laboratory Jan J. Dubowski University of Alberta Robert Fedosejevs Université de Sherbrooke Alan Joly Pacific Northwest National Laboratory Michel Meunier École Polytechnique de Montréal Suwas Nikumb National Research Council Canada Ying Tsui University of Alberta Conference photograph.

  17. Influence of Earth crust composition on continental collision style in Precambrian conditions: Results of supercomputer modelling

    Science.gov (United States)

    Zavyalov, Sergey; Zakharov, Vladimir

    2016-04-01

A number of issues concerning Precambrian geodynamics still remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably from their present-day values. In this work, we show results of numerical supercomputer simulations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with various convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust; 3) basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with completely felsic crust is unlikely. In the case of basic lower crust, continental subduction and subsequent continental rock exhumation can take place. Therefore, formation of ultra-high pressure metamorphic rocks is possible. Continental subduction also occurs in the case of inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer, and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to

  18. Proceedings: 1989 conference on advanced computer technology for the power industry

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, B. (ed.)

    1990-07-01

An EPRI conference to address advanced computer technology was hosted by Arizona Public Service in Scottsdale, Arizona, December 4-6, 1989. Participants represented US and foreign utilities, major electric and computer industry vendors, R&D contractors, and consulting firms. These proceedings contain the text of the technical presentations and summaries of the panel discussions. The conference objectives were: to assess modern computer technologies and how they will affect utility operations; to share US and foreign utility experiences in developing computer-based technical products; and to discuss research conducted by EPRI in advanced computer technology on behalf of its utility members. Technical presentations addressed a broad range of computer-related topics: computer-based training; engineering workstations; hypermedia and other advanced user interfaces; networks and communications; expert systems and other decision-support methodologies; intelligent database management; supercomputing architectures and applications; real-time data processing; computerized technology and information transfer; and neural networks and other emerging technologies.

  19. International Conference on Applied Sciences (ICAS2013)

    Science.gov (United States)

    Lemle, Ludovic Dan; Jiang, Yiwen

    2014-03-01

present new research in the various fields of materials engineering, mechanical engineering, computer engineering, mathematical engineering and clinical engineering. It's our great pleasure to present this volume of IOP Conference Series: Materials Science and Engineering to the scientific community to promote further research in these areas. We sincerely hope that the papers published in this volume will contribute to the advancement of knowledge in the respective fields. All papers published in this volume of IOP Conference Series: Materials Science and Engineering (MSE) have been peer reviewed through processes administered by the editors of the ICAS2013 proceedings, Ludovic Dan Lemle and Yiwen Jiang. Special thanks should be directed to the organizing committee for their tremendous efforts in organizing the conference: General Chair Zhou Laixin, Military Economics Academy of Wuhan Co-chairs Du Qifa, Military Economics Academy of Wuhan Serban Viorel-Aurel, ''Politehnica'' University of Timişoara Fen Youmei, Wuhan University Lin Pinghua, Huazhong University of Science and Technology Members Lin Darong, Military Economics Academy of Wuhan Guo Zhonghou, Military Economics Academy of Wuhan Sun Honghong, Military Economics Academy of Wuhan Liu Dong, Military Economics Academy of Wuhan We thank the authors for their contributions and we would also like to express our gratitude to everyone who contributed to this conference, especially for the generous support of the sponsor: micromega S C Micro-Mega HD S A Ludovic Dan Lemle and Yiwen Jiang Coordinators of the Scientific Committee of ICAS2013 Details of the organizers and members of the scientific committee are available in the PDF

  20. EU-sponsored photovoltaic systems for rural electrification

    Energy Technology Data Exchange (ETDEWEB)

    Riesch, Gerhard [Joint Research Centre of the European Union, JRC, Ispra (Italy)

    1995-12-31

Development and proliferation of renewable energies have been sponsored since 1983 by the European Union, normally up to 40% of the cost (Programme THERMIE and its predecessors). In the frame of this programme, more than one hundred projects of all kinds, with thousands of photovoltaic energy supply systems, have been implemented in Europe; 29 of these projects, with 939 single PV systems, concern electrification of rural sites (e.g. agriculture) or isolated sites (e.g. mountain huts). Most of the single systems are of small size, 50 to 1000 Wp. A few of the systems are larger, up to 25 kWp, and supply local isolated mini-grids. In this paper the main features of the systems in six European countries are presented: the technical, economical and social results as well as the contributions of the Electric Power Utilities (EPUs) to these electrifications are discussed.

  1. Corporate sponsored education initiatives on board the ISS

    Science.gov (United States)

    Durham, Ian T.; Durham, Alyson S.; Pawelczyk, James A.; Brod, Lawrence B.; Durham, Thomas F.

    1999-01-01

    This paper proposes the creation of a corporate sponsored ``Lecture from Space'' program on board the International Space Station (ISS) with funding coming from a host of new technology and marketing spin-offs. This program would meld existing education initiatives in NASA with new corporate marketing techniques. Astronauts in residence on board the ISS would conduct short ten to fifteen minute live presentations and/or conduct interactive discussions carried out by a teacher in the classroom. This concept is similar to a program already carried out during the Neurolab mission on Shuttle flight STS-90. Building on that concept, the interactive simulcasts would be broadcast over the Internet and linked directly to computers and televisions in classrooms worldwide. In addition to the live broadcasts, educational programs and demonstrations can be recorded in space, and marketed and sold for inclusion in television programs, computer software, and other forms of media. Programs can be distributed directly into classrooms as an additional presentation supplement, as well as over the Internet or through cable and broadcast television, similar to the Canadian Discovery Channel's broadcasts of the Neurolab mission. Successful marketing and advertisement can eventually lead to the creation of an entirely new, privately run cottage industry involving the distribution and sale of educationally related material associated with the ISS that would have the potential to become truly global in scope. By targeting areas of expertise and research interest in microgravity, a large curriculum could be developed using space exploration as a unifying theme. Expansion of this concept could enhance objectives already initiated through the International Space University to include elementary and secondary school students. The ultimate goal would be to stimulate interest in space and space related sciences in today's youth through creative educational marketing initiatives while at the

  2. The Emmanuel Schools Foundation: Sponsoring and Leading Transformation at England's Most Improved Academy

    Science.gov (United States)

    Pike, Mark A.

    2009-01-01

    The Emmanuel Schools Foundation (ESF) has so far sponsored four schools in England. Beginning with Emmanuel College in Gateshead in 1990 (which remains a City Technology College) the Foundation sponsors the King's Academy in Middlesbrough, which opened in 2003, and Trinity Academy in Thorne near Doncaster, which opened in 2005. The Foundation's…

  3. 14 CFR 1214.306 - Payload specialist relationship with sponsoring institutions.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Payload specialist relationship with sponsoring institutions. 1214.306 Section 1214.306 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE... specialist relationship with sponsoring institutions. Specialists who are not U.S. Government employees must...

  4. 14 CFR 152.105 - Sponsors and planning agencies: Airport planning.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Sponsors and planning agencies: Airport planning. 152.105 Section 152.105 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF....105 Sponsors and planning agencies: Airport planning. (a) To be eligible to apply for a project for...

  5. A Case Study of Teaching Marketing Research Using Client-Sponsored Projects: Method, Challenges, and Benefits

    Science.gov (United States)

    Bove, Liliana L.; Davies, W. Martin

    2009-01-01

    This case study outlines the use of client-sponsored research projects in a quantitative postgraduate marketing research subject conducted in a 12-week semester in a research-intensive Australian university. The case study attempts to address the dearth of recent literature on client-sponsored research projects in the discipline of marketing.…

  6. 42 CFR 403.822 - Reimbursement of transitional assistance and associated sponsor requirements.

    Science.gov (United States)

    2010-10-01

    ... associated sponsor requirements. 403.822 Section 403.822 Public Health CENTERS FOR MEDICARE & MEDICAID... Prescription Drug Discount Card and Transitional Assistance Program § 403.822 Reimbursement of transitional... in § 403.808. (c) Endorsed sponsors must routinely account to CMS for the transitional...

  8. 20 CFR 416.1204 - Deeming of resources of the sponsor of an alien.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Deeming of resources of the sponsor of an alien. 416.1204 Section 416.1204 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Resources and Exclusions § 416.1204 Deeming of resources of the sponsor of an alien. The resources of...

  9. 42 CFR 440.350 - Employer-sponsored insurance health plans.

    Science.gov (United States)

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.350 Employer-sponsored insurance health plans. (a) A State may provide benchmark or benchmark-equivalent coverage by obtaining employer sponsored health plans (either alone...

  10. 78 FR 44432 - New Animal Drugs; Change of Sponsor; Fentanyl; Iron Injection

    Science.gov (United States)

    2013-07-24

    ... sponsor for two approved new animal drug applications (NADAs) from Alstoe, Ltd., Animal Health, to Sogeval S.A., and a change of sponsor for an NADA from Nexcyon Pharmaceuticals, Inc. to Elanco Animal Health..., NADA 099-667 for IMPOSIL (iron dextran complex) Injection and NADA 110-399 for GLEPTOSIL...

  11. PREFACE: 1982 International Conference on Plasma Physics

    Science.gov (United States)

    Wilhelmsson, Hans

    1982-01-01

    second one to (3) Fusion and (4) Laboratory Plasmas. The 1982 International Conference on Plasma Physics was organized by Chalmers University of Technology. It gathered about 500 participants from 40 countries. Large delegations came from the USA, France, West Germany, Japan, the USSR, and India, the number of participants from these countries ranging from 100 to 20. Sweden had about 50 participating scientists. There were a total of about 20 from the other Scandinavian countries. The principal sponsor of the conference was IUPAP, the International Union of Pure and Applied Physics. The conference also had a number of co-sponsors like IAU, the International Astronomical Union, URSI, the International Union of Radio Science, EPS, the European Physical Society, and EURATOM-FUSION. The conference was supported by Swedish Industry and Swedish Research Boards. The previous ICPP, held in Nagoya two years ago, was the first attempt to combine two types of conferences: the Plasma Theory Conference, first held in Kiev in the Soviet Union in 1971, and the Waves and Instabilities Congress, held for the first time in Innsbruck, Austria in 1973. As a consequence of the success of the Nagoya conference it was decided by the International Organizing Committee of the ICPP that the 1982 conference should also be of the combined type. The 1982 ICPP in Göteborg was thus a Joint Conference of the Fifth Kiev International Conference in Plasma Theory and the Fifth International Congress on Waves and Instabilities in Plasmas. During the conference in Göteborg the International Organizing Committee had a meeting and it was decided that also the next International Conference on Plasma Physics will be of the combined type. It will be held in Lausanne, Switzerland in 1984. The International Organizing Committee on the 1982 International Conference on Plasma Physics comprised about 40 plasma physics scientists from all over the world, who represented various sections of plasma physics. 
I would

  12. International Cryocooler Conference

    CERN Document Server

    Cryocoolers 13

    2005-01-01

This is the 13th volume in the conference series. Over the years the International Cryocooler Conference has become the preeminent worldwide conference for the presentation of the latest developments and test experiences with cryocoolers. The typical applications of this technology include cooling space and terrestrial infrared focal plane arrays, space x-ray detectors, medical applications, and a growing number of high-temperature superconductor applications.

  13. Expectations for Cancun Conference

    Institute of Scientific and Technical Information of China (English)

    DING ZHITAO

    2010-01-01

Compared with the great hopes raised by the Copenhagen Climate Conference in 2009, the 2010 UN Climate Change Conference in Cancun raised fewer expectations. However, the international community is still waiting for a positive outcome that will benefit humankind as a whole. The Cancun conference is another important opportunity for all the participants to advance the Bali Road Map negotiations after last year's meeting in Copenhagen, which failed to reach a legally binding treaty for the years beyond 2012.

  14. Chair's Introduction to 2009 IEEE Circuits and Systems International Conference on Testing and Diagnosis

    Institute of Scientific and Technical Information of China (English)

    Rueywen Liu

    2009-01-01

Based on the recommendation of ICTD'09 TPC members, this Special Issue of the Journal of Electronic Science & Technology of China (JESTC) contains 22 high-quality papers selected from the Proceedings of the 2009 IEEE Circuits and Systems International Conference on Testing and Diagnosis (ICTD'09), which was fully sponsored by the IEEE Circuits and Systems Society (CASS), technically co-sponsored by the University of Electronic Science and Technology of China (UESTC), the Chinese Institute of Electronics (CIE), and the China Instrument & Control Society (CIS), and organized by UESTC.

  15. Conference proceedings ISES 2014

    DEFF Research Database (Denmark)

    Christensen, Janne Winther; Peerstrup Ahrendt, Line; Malmkvist, Jens

The 10th International Equitation Science Conference is held in Denmark from August 6th to 9th, 2014. This book of proceedings contains abstracts of 35 oral and 57 poster presentations within the conference themes Equine Stress, Learning and Training, as well as free papers.

  16. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, V.S. (Emory Univ., Atlanta, GA (USA). Dept. of Mathematics and Computer Science); Geist, G.A. (Oak Ridge National Lab., TN (USA))

    1991-01-01

The PVM (Parallel Virtual Machine) system enables supercomputer level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high performance applications, and shows that supercomputer level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities, as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM are discussed, including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production quality concurrent applications has been successfully executed using PVM on a variety of networked platforms. The paper mentions representative examples and discusses two in detail. The first is a materials science problem that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations are presented, accompanied by discussions of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.
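The master-worker pattern that PVM supports can be sketched in miniature. The sketch below uses Python threads and queues purely as stand-ins for PVM tasks and its message-passing calls (pvm_send/pvm_recv), with an invented sum-of-squares kernel in place of the superconductor codes; it is a conceptual analogue, not PVM's actual C/Fortran API.

```python
import threading
import queue

def worker(inbox, outbox):
    """A worker task: receive work-unit messages, send back partial results."""
    while True:
        chunk = inbox.get()          # blocking receive, like pvm_recv
        if chunk is None:            # shutdown message from the master
            break
        outbox.put(sum(x * x for x in chunk))  # send partial result back

def master(data, n_workers=4, chunk_size=250):
    """Scatter chunks of work to workers, then gather and combine results."""
    inbox, outbox = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(inbox, outbox))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for c in chunks:                 # scatter work units
        inbox.put(c)
    total = sum(outbox.get() for _ in chunks)   # gather partial sums
    for _ in threads:                # one shutdown message per worker
        inbox.put(None)
    for t in threads:
        t.join()
    return total

print(master(list(range(1000))))  # sum of squares 0..999 = 332833500
```

The same scatter/gather structure is what lets PVM applications add or remove hosts without changing the computation, since the master only sees a stream of work-unit and result messages.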

  17. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    Science.gov (United States)

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multiple dimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows, distributed to cover the space of order parameters, with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each window. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results were then used to estimate the absolute binding free energy of the calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
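The exchange step at the heart of US/H-REMD can be illustrated with the standard Metropolis criterion for swapping configurations between two neighboring umbrella windows. The harmonic bias form, the force constant, and the window centers below are illustrative choices, not parameters from the Calbindin D9k study:

```python
import math
import random

def harmonic_bias(x, center, k=10.0):
    """Umbrella bias U(x) = k/2 (x - center)^2 restraining an order parameter."""
    return 0.5 * k * (x - center) ** 2

def accept_exchange(x_i, x_j, center_i, center_j, beta=1.0, rng=random.random):
    """Metropolis criterion for swapping configurations x_i, x_j between
    two umbrella windows (biases centered at center_i and center_j).
    Accept with probability min(1, exp(-beta * delta)), where delta is the
    change in total bias energy caused by the swap."""
    before = harmonic_bias(x_i, center_i) + harmonic_bias(x_j, center_j)
    after = harmonic_bias(x_j, center_i) + harmonic_bias(x_i, center_j)
    delta = beta * (after - before)
    return delta <= 0 or rng() < math.exp(-delta)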

  18. Battered Children and Child Abuse. Highlights and Recommendations of the CIOMS/WHO Conference (Berne, Switzerland, December 1985).

    Science.gov (United States)

    Bankowski, Z., Ed.; Carballo, M., Ed.

    This document provides highlights and recommendations of a conference on battered children and child abuse sponsored by the Council for International Organizations of Medical Sciences (CIOMS) and the World Health Organization (WHO). In a discussion of the nature of the child abuse problem, the history of child maltreatment is briefly reviewed and…

  19. Second International Conference on Near-Field Optical Analysis: Photodynamic Therapy and Photobiology Effects

    Science.gov (United States)

    Bulgher, Debra L. (Editor); Morrison, Dennis

    2002-01-01

The International NASA/DARPA Photobiology Conference held at the Johnson Space Center in Houston, TX demonstrated where low level laser therapy (LLLT), respectively low intensity light activated biostimulation (LILAB), and nanotechnological applications employing photobiomodulation techniques will presumably go in the next ten years. The conference was a continuation of the 1st International Conference on Nearfield Optical Analysis organized by Andrei Sommer (ENSOMA Lab, University of Ulm, Germany) in November 2000 at Castle Reisenburg, Germany, which started with a group of ten scientists from eight different countries. The 1st conference was co-sponsored by the American Chemical Society to evaluate the molecular mechanisms of accelerated and normal wound healing processes. The 2nd conference was co-sponsored by DARPA, NASA-JSC and the Medical College of Wisconsin. Despite the short time between events, the 2nd conference hosted 40 international experts from universities, research institutes, agencies and industry. The materials published here are expected to become milestones forming a novel platform in biomedical photobiology. The multidisciplinary group of researchers focused on LLLT/LILAB applications under extreme conditions, expected to have beneficial effects particularly in space, on submarines, and under severe battlefield conditions. The group also focused on novel technologies allowing investigation of the interaction of light with biological systems, molecular mechanisms of wound healing, bone regeneration, nerve regeneration, pain modulation, as well as biomineralization and biofilm formation processes induced by nanobacteria.

  20. Second international conference on isotopes. Conference proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, C.J. [ed.

    1997-10-01

The Second International Conference on Isotopes (2ICI) was hosted by the Australian Nuclear Association in Sydney, NSW, Australia. The theme of the second conference, 'Isotopes for Industry, Health and a Better Environment', recognizes that isotopes have been used in these fields successfully for many years and offer prospects for increasing use in the future. The worldwide interest in the use of research reactors and accelerators and in applications of stable and radioactive isotopes, isotopic techniques and radiation in industry, agriculture, medicine, environmental studies and research in general was considered. Other radiation issues, including radiation protection and safety, were also addressed. International and national overviews and subject reviews invited from leading experts were included to introduce the program of technical sessions. The invited papers were supported by contributions accepted from participants for oral and poster presentation. A Technical Exhibition was held in association with the Conference. This volume contains the full text or extended abstracts of papers number 61 to number 114.

  1. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

There is an increasing demand to run environmental models at a large scale: simulations over large areas at high resolution. Heterogeneous computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, exploiting it requires detailed knowledge of the underlying hardware, parallel algorithm design and implementation in an efficient systems programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime efficient poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware for whichever combination of model building blocks scientists use.
We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using
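The separation the abstract argues for, domain scientists composing building blocks while the framework handles parallel execution, can be sketched as follows. The block names (add, scale), the chunking scheme, and the thread pool are invented for illustration and are not PCRaster's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

# Two "pre-programmed building blocks" operating cell-wise on a raster
# (modelled here as a flat list of cell values).
def add(a, b):
    return [x + y for x, y in zip(a, b)]

def scale(a, factor):
    return [x * factor for x in a]

def run_parallel(op, raster, *args, chunks=4):
    """Framework side: split the raster into chunks, run the building
    block on each chunk in a worker pool, and reassemble the result.
    The modeller never sees this machinery."""
    size = max(1, len(raster) // chunks)
    parts = [raster[i:i + size] for i in range(0, len(raster), size)]
    sliced_args = [
        [a[i:i + size] for i in range(0, len(a), size)] if isinstance(a, list)
        else [a] * len(parts)            # broadcast scalar arguments
        for a in args
    ]
    with ThreadPoolExecutor() as pool:
        out = pool.map(op, parts, *sliced_args)
    return [x for part in out for x in part]

# Modeller side: only the model equation, no parallelism in sight.
rain = [1.0, 2.0, 3.0, 4.0]
runoff = run_parallel(scale, run_parallel(add, rain, rain), 0.5)
print(runoff)  # [1.0, 2.0, 3.0, 4.0]
```

The design point is that `run_parallel` could be swapped for a GPU or MPI back end without changing the modeller-facing line at the bottom.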

  2. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.

    Science.gov (United States)

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4×10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods, the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores, had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will be ultimately limited by the imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect
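The semantics of the MPI_Allgather baseline are simple to model: after the call, every rank holds the union of all ranks' spike lists for the last integration interval. A minimal pure-Python model follows (the rank layout and spike payloads are illustrative; a real simulation would use an MPI library):

```python
def allgather(per_rank_spikes):
    """Model of MPI_Allgather semantics for spike exchange.
    per_rank_spikes[r] is the list of (gid, time) spikes fired on rank r
    since the last synchronization. Returns the buffer each rank holds
    afterwards: the concatenation of every rank's spikes."""
    merged = [spike for rank_spikes in per_rank_spikes for spike in rank_spikes]
    return [list(merged) for _ in per_rank_spikes]

# Three "ranks"; rank 2 fired nothing this interval.
buffers = allgather([[(0, 0.1)], [(5, 0.2), (7, 0.3)], []])
print(buffers[0])  # [(0, 0.1), (5, 0.2), (7, 0.3)]
```

This also shows why the abstract's Multisend variants matter: Allgather delivers every spike to every rank regardless of connectivity, whereas a Multisend only targets the ranks that actually contain postsynaptic cells.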

  3. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'07)

    Science.gov (United States)

    Sobie, Randall; Tafirout, Reda; Thomson, Jana

    2007-07-01

    The 2007 International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 2-7 September 2007 in Victoria, British Columbia, Canada. CHEP is a major series of international conferences for physicists and computing professionals from the High Energy and Nuclear Physics community, Computer Science and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing, and future activities. The CHEP'07 conference had close to 500 attendees with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising oral and poster presentations, and an industrial exhibition. Conference tracks covered topics in Online Computing; Event Processing; Software Components, Tools and Databases; Software Tools and Information Systems; Computing Facilities, Production Grids and Networking; Grid Middleware and Tools; Distributed Data Analysis and Information Management; and Collaborative Tools. The conference included a successful whale-watching excursion involving over 200 participants and a banquet at the Royal British Columbia Museum. The next CHEP conference will be held in Prague in March 2009. We would like to thank the sponsors of the conference and the staff at the TRIUMF Laboratory and the University of Victoria who made CHEP'07 a success. Randall Sobie and Reda Tafirout, CHEP'07 Conference Chairs

  4. Participation willingness in web surveys: exploring effect of sponsoring corporation's and survey provider's reputation.

    Science.gov (United States)

    Fang, Jiaming; Wen, Chao; Pavur, Robert

    2012-04-01

    Prior research involving response rates in Web-based surveys has not adequately addressed the effect of the reputation of a sponsoring corporation that contracts with a survey provider. This study investigates the effect of two factors, namely, the reputation of a survey's provider and the reputation of a survey's sponsoring corporation, on the willingness of potential respondents to participate in a Web survey. Results of an experimental design with these two factors reveal that the sponsoring corporation's and the survey provider's strong reputations can induce potential respondents to participate in a Web survey. A sponsoring corporation's reputation has a greater effect on the participation willingness of potential respondents of a Web survey than the reputation of the survey provider. A sponsoring corporation with a weak reputation who contracts with a survey provider having a strong reputation results in increased participation willingness from potential respondents if the identity of the sponsoring corporation is disguised in a survey. This study identifies the most effective strategy to increase participation willingness for a Web-based survey by considering both the reputations of the sponsoring corporation and survey provider and whether to reveal their identities.

  5. The Influence of Sponsor-Event Congruence in Sponsorship of Music Festivals

    Directory of Open Access Journals (Sweden)

    Penny Hutabarat

    2014-04-01

    Full Text Available This paper examines the influence of sponsor-event congruence on brand image, attitudes toward the brand, and purchase intention. After a review of the literature and formulation of hypotheses, data were gathered by distributing a questionnaire to 155 audience members at the Java Jazz Music Festival, first through convenience sampling and then through snowball sampling. The data were analyzed with Structural Equation Modeling (SEM). The results show that sponsor-event congruence has a positive impact on brand image and on attitudes toward the sponsor's brand. Brand image in turn has a positive impact on purchase intention; by contrast, attitudes toward the brand do not. Given these results, congruence between event and sponsor is a key driver of sponsorship effectiveness: it strengthens brand image, fosters favorable attitudes toward the brand, and supports the success of marketing communication programs, particularly sponsorship. In addition, image transfer increases when sponsor and event fit, directing consumers toward the intention to buy the sponsor's products or services (purchase intention). In conclusion, sponsor-event congruence affects consumer responses to sponsorship at the cognitive, affective, and behavioral levels.

  6. Radiation '96. Conference handbook

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-31

    The conference program included eight invited lectures covering a range of contemporary topics in radiation science and technology. In addition, thirty-two oral papers were presented, along with forty-five posters. The conference handbook contains one-page précis or extended abstracts of all presentations, and is a substantial compendium of current radiation research in Australia.

  7. Expectations for Cancun Conference

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Compared with the great hopes raised by the Copenhagen Climate Conference in 2009, the 2010 UN Climate Change Conference in Cancun aroused fewer expectations. However, the international community is still waiting for a positive outcome that will benefit humankind as a whole.

  8. Criminal implications of sponsoring in medicine: legal ramifications and recommendations; Strafrechtliche Bedeutung des Sponsorings in der Medizin: Gesetzliche Rahmenbedingungen und Handlungsempfehlungen

    Energy Technology Data Exchange (ETDEWEB)

    Mahnken, A.H.; Guenther, R.W. [Klinik fuer Radiologische Diagnostik, Universitaetsklinikum Aachen (Germany); Theilmann, M. [Rechtsanwalt Martin Theilmann, Osnabrueck (Germany); Bolenz, M. [Fakultaet Wirtschafts- und Sozialwissenschaften, Fachhochschule Osnabrueck (Germany)

    2005-08-01

    As a consequence of the so-called "Heart-Valve-Affair" in 1994, the German public became aware of the potential criminal significance of industrial sponsoring and third-party financial support in medicine. Since 1997, when the German Anti-Corruption Law came into effect, the penal regulations regarding bribery and benefits for public officers were tightened. Due to the lack of explicit and generally accepted guidelines in combination with regional differences of jurisdiction, there is a lingering uncertainty regarding the criminal aspects of third-party funding and industrial sponsoring. The aim of this review is to summarize the penal and professional implications of third-party funding and sponsoring in medicine including recent aspects of jurisdiction. The currently available recommendations on this issue are introduced. (orig.)

  9. Proceedings of the 1995 oil heat technology conference and workshop

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, R.J.

    1995-04-01

    This report documents the Proceedings of the 1995 Oil Heat Technology Conference and Workshop, held on March 22-23 at Brookhaven National Laboratory (BNL), and sponsored by the U.S. Department of Energy - Office of Building Technologies (DOE-OBT), in cooperation with the Petroleum Marketers Association of America. This Conference, which was the ninth held since 1984, is a key technology transfer activity supported by the ongoing Combustion Equipment Technology (Oil-Heat R&D) program at BNL, and is aimed at providing a forum for the exchange of information among international researchers, engineers, manufacturers, and marketers of oil-fired space-conditioning equipment. The objectives of the Conference were to: (1) Identify and evaluate the state-of-the-art and recommend new initiatives for higher efficiency, a cleaner environment, and to satisfy consumer needs cost-effectively, reliably, and safely; (2) Foster cooperation among federal and industrial representatives with the common goal of sustained national economic growth and energy security via energy conservation. The 1995 Oil Technology Conference comprised: (a) three plenary sessions devoted to presentations and summations by public and private sector industry representatives from the United States, and Canada, and (b) four workshops which focused on mainstream issues in oil-heating technology. Individual reports presented at the conference have been processed separately for database entry.

  10. 3rd Cryocooler Conference

    CERN Document Server

    Louie, Berverly; McCarthy, Sandy

    1985-01-01

    Cryocoolers 3 documents the output of the Third Cryocooler Conference, held at the National Bureau of Standards, Boulder, Colorado, on September 17-18, 1984. About 140 people from 10 countries attended the conference, representing industry, government, and academia. A total of 26 papers were presented orally at the conference and all appear in written form in the proceedings. The focus of this conference was on small cryocoolers in the temperature range of 4-80 K. Mechanical and nonmechanical types are discussed in the various papers. Applications of these small cryocoolers include the cooling of infrared detectors, cryopumps, small superconducting devices and magnets, and electronic devices. The conference proceedings reproduced here were published by the National Bureau of Standards in Boulder, Colorado as NBS Special Publication #698.

  11. 2010 China International Friendship Cities Conference Held in Shanghai

    Institute of Scientific and Technical Information of China (English)

    Our Staff Reporter

    2010-01-01

    The 2010 China International Friendship Cities Conference, under the theme of "City Cooperation Leads to a Better Life", convened in Shanghai September 8-10, co-sponsored by the CPAFFC and the China International Friendship-Cities Association (CIFCA) and hosted by the Shanghai Municipal Government. A total of 427 representatives from local governments, local government organizations, friendship-city organizations and friendship associations in 47 countries, as well as 144 Chinese local government officials from 31 provinces (autonomous regions and municipalities directly under the Central Government), participated.

  12. Second international conference on isotopes. Conference proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, C.J. [ed.]

    1997-10-01

    The Second International Conference on Isotopes (2ICI) was hosted by the Australian Nuclear Association in Sydney, NSW, Australia. The theme of the second conference, Isotopes for Industry, Health and a Better Environment, recognizes that isotopes have been used in these fields successfully for many years and offer prospects for increasing use in the future. The worldwide interest in the use of research reactors and accelerators and in applications of stable and radioactive isotopes, isotopic techniques and radiation in industry, agriculture, medicine, environmental studies and research in general was considered. Other radiation issues, including radiation protection and safety, were also addressed. International and national overviews and subject reviews invited from leading experts were included to introduce the program of technical sessions. The invited papers were supported by contributions accepted from participants for oral and poster presentation. A technical exhibition was held in association with the conference. This volume contains the foreword, the technical program, the author index, and the papers (1-60) presented at the conference.

  13. Proposal of a Desk-Side Supercomputer with Reconfigurable Data-Paths Using Rapid Single-Flux-Quantum Circuits

    Science.gov (United States)

    Takagi, Naofumi; Murakami, Kazuaki; Fujimaki, Akira; Yoshikawa, Nobuyuki; Inoue, Koji; Honda, Hiroaki

    We propose a desk-side supercomputer with large-scale reconfigurable data-paths (LSRDPs) using superconducting rapid single-flux-quantum (RSFQ) circuits. It has several sets of computing units, each consisting of a general-purpose microprocessor, an LSRDP and a memory. An LSRDP consists of a large number, e.g., a few thousand, of floating-point units (FPUs) and operand routing networks (ORNs) that connect the FPUs. We reconfigure the LSRDP to fit a computation, i.e., a group of floating-point operations that appears in a 'for' loop of a numerical program, by setting the routes in the ORNs before the execution of the loop. We propose to implement the LSRDPs with RSFQ circuits; the processors and the memories can be implemented in semiconductor technology. We expect that a 10 TFLOPS supercomputer, together with its refrigeration engine, can be housed in a desk-side rack using a near-future RSFQ process technology, such as a 0.35 μm process.
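    The configure-once-then-stream idea in this abstract can be sketched in software. The following toy model (names, operations and granularity are hypothetical; the proposed LSRDP does this with hardware FPUs and routing networks, not Python objects) wires a tiny two-stage data-path for y = a*x + b before the loop, then reuses the fixed route for every iteration:

```python
# Toy model of a reconfigurable data-path: each "FPU" is a node in a
# small dataflow graph that is wired once, then reused for every
# iteration of a loop. Hypothetical example for y[i] = a*x[i] + b.
class FPU:
    def __init__(self, op):
        self.op = op

    def fire(self, *operands):
        return self.op(*operands)

def configure_axpb_path():
    """Wire a two-stage path (multiply -> add) before the loop runs,
    mimicking setting the operand routing network once."""
    mul = FPU(lambda a, x: a * x)
    add = FPU(lambda p, b: p + b)

    def path(a, x, b):                  # fixed route: mul feeds add
        return add.fire(mul.fire(a, x), b)

    return path

path = configure_axpb_path()            # one-time reconfiguration
xs = [1.0, 2.0, 3.0]
ys = [path(2.0, x, 0.5) for x in xs]    # the loop reuses the fixed route
print(ys)                               # [2.5, 4.5, 6.5]
```

    The point of the hardware scheme is that the routing cost is paid once per loop rather than once per operation, so the FPUs can stream operands at full rate during the loop body.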

  14. 超级计算中心核心应用的浅析%Brief Exploration on Technical Development of Key Applications at Supercomputing Center

    Institute of Scientific and Technical Information of China (English)

    党岗; 程志全

    2013-01-01

    At present, most of China's national supercomputing centers follow a building model of local-government investment with market-oriented application development. Local governments care most about high-performance computing applications and services for local enterprises and institutions, so supercomputing centers are often used for ordinary applications, and it is difficult to bring the strategic role of supercomputing into full play. How these highly capable "aircraft carriers" can survive, expand their reach, and drive technological innovation has long been a research topic in the field. This paper gives a preliminary discussion of the challenges facing the core applications of domestic supercomputing centers and offers several suggestions for putting those core applications at the service of local development.

  15. The story of a small Canadian congregation sponsoring a refugee family

    Directory of Open Access Journals (Sweden)

    Shannon Tito

    2017-02-01

    Full Text Available Steps for private refugee sponsorship in Canada are not clearly spelled out for those seeking to be sponsors. While the process is rewarding, it is also challenging and sometimes frustrating.

  16. Measuring Consumer Reactions to Sponsoring Partnerships Based upon Emotional and Attitudinal Responses

    DEFF Research Database (Denmark)

    Riis Christensen, Sverre

    2004-01-01

    Consumers' reactions to being exposed to sponsorships have primarily been measured and documented by applying cognitive information-processing models to the phenomenon. In the paper it is argued that such effects are probably better modelled by applying models of peripheral information processing… to the measurements, and it is suggested that the effects can be measured at the attitudes-towards-the-sponsor and emotion-towards-the-sponsor levels. This type of modelling is known as the ELAM model; however, the types of independent variables involved are new to research into sponsorship effects. Two… in consumer reactions towards sponsored objects of different natures as well as towards potential sponsoring organisations. For instance, the charitable institutions measured in the study elicit larger negative emotional responses than positive responses, corresponding to a negative Net Emotional Response…

  17. Industry-Sponsored Dental Health Teaching Aids: Selection Criteria and Program Examples.

    Science.gov (United States)

    Travis, Donna L.

    1982-01-01

    Ten questions are provided to facilitate selection and evaluation of materials for a dental health curriculum. Examples of industry-sponsored dental health programs available free or at minimal cost are given. (JN)

  18. 45 CFR 2551.24 - What are a sponsor's responsibilities for securing community participation?

    Science.gov (United States)

    2010-10-01

    ... volunteerism; (3) Capable of helping the sponsor meet its administrative and program responsibilities including fund-raising, publicity and impact programming; (4) With interest in and knowledge of the capability of...

  19. 45 CFR 2553.24 - What are a sponsor's responsibilities for securing community participation?

    Science.gov (United States)

    2010-10-01

    ... community service and volunteerism; (3) Capable of helping the sponsor meet its administrative and program responsibilities including fund-raising, publicity and programming for impact; (4) With an interest in and...

  20. 45 CFR 2552.24 - What are a sponsor's responsibilities for securing community participation?

    Science.gov (United States)

    2010-10-01

    ..., volunteerism and children's issues; (3) Capable of helping the sponsor meet its administrative and program responsibilities including fund-raising, publicity and programming for impact; (4) With interest in and knowledge...

  1. [U.S. Fish and Wildlife Service Sponsored Projects : 1987-1997

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This document offers a list, as well as a timeline, of projects which the U.S. Fish and Wildlife Service has sponsored since establishing a field station at Rocky...

  2. 76 FR 2807 - New Animal Drugs; Change of Sponsor; Follicle Stimulating Hormone

    Science.gov (United States)

    2011-01-18

    ... sponsor for a new animal drug application (NADA) for follicle stimulating hormone from Ausa International... transferred ownership of, and all rights and interest in, NADA 141-014 for SUPER-OV (follicle...

  3. 75 FR 20268 - Implantation or Injectable Dosage Form New Animal Drugs; Change of Sponsor; Propofol

    Science.gov (United States)

    2010-04-19

    ... change of sponsor for a new animal drug application (NADA) from Intervet, Inc., to Teva Animal Health... and interest in, approved NADA 141-070 for RAPINOVET (propofol), an ] injectable anesthetic, to...

  4. Conference Report: Wyoming Invitational Conference on Instructional Applications of Computers.

    Science.gov (United States)

    Kansky, Bob

    This report: (1) describes the organization of an invitational conference aimed at gathering direction from classroom teachers regarding instructional applications of computers; (2) provides copies of all materials used in organizing such a conference; and (3) reports the results of the conference in terms of conference products (resolutions,…

  5. Effects of sponsorship disclosure timing on the processing of sponsored content: a study on the effectiveness of European disclosure regulations

    NARCIS (Netherlands)

    Boerman, S.C.; van Reijmersdal, E.A.; Neijens, P.C.

    2014-01-01

    This study investigates whether the timing of sponsorship disclosure affects viewers’ processing of sponsored content, and whether a disclosure influences the persuasive effect of the sponsored content. A model is proposed in which sponsorship disclosure enhances the recognition of sponsored televis

  6. An Empirical Analysis of Search Engine Advertising: Sponsored Search in Electronic Markets

    OpenAIRE

    Anindya Ghose; Sha Yang

    2009-01-01

    The phenomenon of sponsored search advertising--where advertisers pay a fee to Internet search engines to be displayed alongside organic (nonsponsored) Web search results--is gaining ground as the largest source of revenues for search engines. Using a unique six-month panel data set of several hundred keywords collected from a large nationwide retailer that advertises on Google, we empirically model the relationship between different sponsored search metrics such as click-through rates, conve...
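    The sponsored-search metrics named in this abstract relate to each other in a simple way; a minimal sketch with made-up numbers (the function name and figures are illustrative, not from the study's data set):

```python
# Relationship between standard sponsored-search metrics, computed
# from a hypothetical keyword's weekly totals.
def keyword_metrics(impressions, clicks, conversions, total_cost):
    ctr = clicks / impressions          # click-through rate
    conv_rate = conversions / clicks    # conversion rate per click
    cpc = total_cost / clicks           # effective cost per click
    return ctr, conv_rate, cpc

ctr, conv_rate, cpc = keyword_metrics(
    impressions=20_000, clicks=500, conversions=25, total_cost=250.0)
print(round(ctr, 3), round(conv_rate, 3), round(cpc, 2))  # 0.025 0.05 0.5
```

    Empirical models like the one in the abstract then regress such per-keyword metrics on keyword attributes and rank position.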

  7. Sponsoring in a Finnish Women's Football Club : Case: Pallokissat Kuopio Ry

    OpenAIRE

    Mikkonen, Marjukka

    2017-01-01

    This thesis studies the challenges of sponsoring in Finnish female football. The aim of this research was to examine the challenges women’s football clubs face in sponsorship management and suggest improvements for sponsorship management of Pallokissat Kuopio ry. Pallokissat plays in Naisten Liiga and is one of the biggest football clubs for girls and women in eastern Finland. This research focuses on the viewpoint of the sponsored party but in order to receive relevant and applicable data so...

  8. International Conference of Intelligence Computation and Evolutionary Computation ICEC 2012

    CERN Document Server

    Intelligence Computation and Evolutionary Computation

    2013-01-01

    The 2012 International Conference of Intelligence Computation and Evolutionary Computation (ICEC 2012) was held on July 7, 2012 in Wuhan, China. The conference was sponsored by the Information Technology & Industrial Engineering Research Center. ICEC 2012 is a forum for the presentation of new research results in intelligent computation and evolutionary computation. Cross-fertilization of intelligent computation, evolutionary computation, evolvable hardware and newly emerging technologies is strongly encouraged. The forum aims to bring together researchers, developers, and users from around the world, in both industry and academia, to share state-of-the-art results, to explore new areas of research and development, and to discuss emerging issues facing intelligent computation and evolutionary computation.

  9. NATO Conference on Work, Organizations, and Technological Change

    CERN Document Server

    Niehaus, Richard

    1982-01-01

    This volume is the proceedings of the symposium entitled "Work, Organizations and Technological Change", which was held in Garmisch-Partenkirchen, West Germany, 14-19 June 1981. The meeting was sponsored by the Special Panel on Systems Sciences of the NATO Scientific Affairs Division. In proposing this meeting, the Symposium Directors built upon several preceding NATO conferences in the general area of personnel systems, manpower modelling, and organization. The most recent NATO conference, entitled "Manpower Planning and Organization Design," was held in Stresa, Italy in 1977. That meeting was organized to foster research on the interrelationships between programmatic approaches to personnel planning within organizations and behavioral science approaches to organization design. From that context of corporate planning, the total internal organizational perspective was the MACRO view, and the selection, assignment, care and feeding of the people was the MICRO view. Conceptually, this meant that an integrated appr...

  10. The global crisis of malaria: report on a Yale conference.

    Science.gov (United States)

    Snowden, Frank M

    2009-03-01

    An international conference, "The Global Crisis of Malaria: Lessons of the Past and Future Prospects," met at Yale University, November 7-9, 2008. The symposium was organized by Professor Frank Snowden and sponsored by the Provost's office, the MacMillan Center, the Program in the History of Science and History of Medicine, and the Section of the History of Medicine at the Yale School of Medicine. It brought together experts on malaria from a variety of disciplines, countries, and experiences--physicians, research scientists, historians of medicine, public health officials, and representatives of several non-governmental organizations (NGOs). An underlying theme was that much could be gained from a big-picture examination across disciplinary frontiers of the contemporary public health problem caused by malaria. Particular features of the conference were its intense scrutiny of historical successes and failures in malaria control and its demonstration of the relevance of history to policy discussions in the field.

  11. Proceedings of the 1993 oil heat technology conference and workshop

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, R.J.

    1993-09-01

    This report documents the proceedings of the 1993 Oil Heat Technology Conference and Workshop, held on March 25-26 at Brookhaven National Laboratory (BNL), and sponsored by the US Department of Energy - Office of Building Technologies (DOE-OBT), in cooperation with the Petroleum Marketers Association of America. This Conference, which was the seventh held since 1984, is a key technology-transfer activity supported by the ongoing Combustion Equipment Technology (Oil-Heat R&D) program at BNL, and is aimed at providing a forum for the exchange of information among international researchers, engineers, manufacturers, and marketers of oil-fired space-conditioning equipment. Selected papers have been processed separately for inclusion in the Energy Science and Technology Database.

  12. 9th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Lin, Jerry; Pan, Jeng-Shyang; Tin, Pyke; Yokota, Mitsuhiro; Genetic and Evolutionary Computing

    2016-01-01

    This volume of Advances in Intelligent Systems and Computing contains the accepted papers presented at ICGEC 2015, the 9th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by the Ministry of Science and Technology, Myanmar; the University of Computer Studies, Yangon; the University of Miyazaki in Japan; Kaohsiung University of Applied Science in Taiwan; Fujian University of Technology in China; and VSB-Technical University of Ostrava. ICGEC 2015 was held 26-28 August 2015 in Yangon, Myanmar. Yangon, the most multiethnic and cosmopolitan city in Myanmar, is the main gateway to the country. Despite being the commercial capital of Myanmar, Yangon is a city steeped in its rich history and culture, an integration of ancient traditions and spiritual heritage. The stunning SHWEDAGON Pagoda is the centerpiece of Yangon city, which itself is famous for the best British colonial-era architecture. Of particular interest in many shops of Bogyoke Aung San Market,...

  13. Le sponsoring politique : un défi et espoir pour la communication politique

    Directory of Open Access Journals (Sweden)

    H. Zouabi

    2017-03-01

    Full Text Available In an economic world driven by fierce and intensifying competition, companies tend to develop and diversify their means of communication and to take advantage of changes in their environment. Alongside traditional media, companies rely on other communication tools, notably sponsorship. Today sponsorship is an effective element of corporate communication strategy, and it continues to grow in every field: sport, culture, health, politics, and so on. However, specialized, professional and academic research on political sponsorship is very limited, which is why this study was undertaken. The objective of this article is therefore to explore political sponsorship as a challenge and a hope for political communication. The results of this research show that companies regard sponsorship as a strategic variable. As for political sponsorship, the attitude of the surveyed companies varies widely according to their perception of the risks and their expectations for their brands.

  14. Enhancing family advocacy networks: an analysis of the roles of sponsoring organizations.

    Science.gov (United States)

    Briggs, H E; Koroloff, N M

    1995-08-01

    Family participation in shaping system reforms in children's mental health has increased over the past ten years. In 1990 the National Institute of Mental Health funded the development and enhancement of 15 statewide advocacy organizations that were to be controlled and staffed by families of children who have serious emotional disorders. These family advocacy organizations had three major goals: to establish support networks, to advocate for service system reforms, and to develop statewide family advocacy networks. Seven family advocacy networks worked with sponsoring organizations because they needed assistance and/or could not receive funding directly. State and local chapters of the National Alliance for the Mentally Ill and the National Mental Health Association served in this capacity. Because there were no guidelines to educate sponsoring organizations about their interorganizational roles and responsibilities, staff of some sponsoring organizations used approaches that were supportive and effective, while staff in other organizations used methods that were counterproductive. The information and recommendations discussed in this paper are based on evaluation data and observations of the relationships between seven sponsoring organizations and family advocacy groups over a three-year period. This paper proposes a conceptual framework that includes: (1) a clear definition of the sponsoring organization's roles, and (2) an analysis of the advantages, limitations, and critical issues for the sponsoring organization.

  15. 23rd International Conference on Industrial Engineering and Engineering Management 2016

    CERN Document Server

    Shen, Jiang; Dou, Runliang

    2017-01-01

    The International Conference on Industrial Engineering and Engineering Management is sponsored by the Chinese Industrial Engineering Institution, CMES, the only national-level academic society for industrial engineering. The conference is held annually as the major event in this area. As the largest and most authoritative international academic conference held in China, it provides an academic platform for experts and entrepreneurs in international industrial engineering and management to exchange their research results. Many experts in various fields from China and foreign countries gather together at the conference to review, exchange, summarize and promote their achievements in industrial engineering and engineering management. Some experts pay special attention to the current state of application of the related techniques in China as well as their future prospects, such as Industry 4.0, Green Product Design, Quality Control and Management, Supply Chain and Logistics Management...

  16. Energy optimization of water and wastewater management for municipal and industrial applications conference

    Energy Technology Data Exchange (ETDEWEB)

    1980-08-01

    These proceedings document the presentations given at the Energy Optimization of Water and Wastewater Management for Municipal and Industrial Applications Conference, sponsored by the Department of Energy (DOE). The conference was organized and coordinated by Argonne National Laboratory. The conference focused on energy use and conservation in water and wastewater. The General Session also reflects DOE's commitment to the support and development of waste and wastewater systems that are environmentally acceptable. The conference proceedings are divided into two volumes. Volume 1 contains the General Session and Sessions 1 to 5. Volume 2 covers Sessions 6 to 12. Separate abstracts are prepared for each item within the scope of the Energy Data Base.

  18. Major Conference about Astronomical Technology in Munich

    Science.gov (United States)

    2000-03-01

    Press Conference on Monday, March 27, 2000 What are the latest astronomical discoveries made with the new 8-10 metre class astronomical telescopes? Will it be possible to construct even more powerful instruments on the ground and in space to explore the near and distant Universe at all wavelengths from gamma-rays to radio waves? Which research areas in this dynamic science are likely to achieve breakthroughs with emerging new technologies? These are some of the central themes that will be discussed by more than 600 specialists from all over the world at an international conference in Munich (Germany), "Astronomical Telescopes and Instruments 2000", beginning on Monday, March 27, 2000. For five days, the modern architecture of the new International Congress Center in the Bavarian capital will be the scene of lively exchanges about recent progress at the world's top-class astronomical research facilities and the presentation of inspired new ideas about future technological opportunities. The conference will be accompanied by numerous on-site exhibition stands from the major industries and research organisations in this wide field. This meeting is the latest in a series, organised every second year, alternately in the USA and Europe, by the International Society for Optical Engineering (SPIE), this year with the European Southern Observatory (ESO) as co-sponsor and host institution. The conference will be opened in the morning of March 27 by the Bavarian Minister of Science, Research and Arts, Hans Zehetmair. His address will be followed by keynote speeches by Massimo Tarenghi (European Southern Observatory), James B. Breckenridge (National Science Foundation, USA), Harvey Butcher (Netherlands Foundation for Research in Astronomy) and Albrecht Ruediger (Max Planck Institut für Quantenoptik, Germany). The conference is subtitled "Power Telescopes and Instrumentation into the New Millennium" and will be attended by leading scientists and engineers from all

  19. The learning conference

    DEFF Research Database (Denmark)

    Ravn, Ib

    The typical one-day conference attended by managers or professionals in search of inspiration is packed with PowerPoint presentations and offers little opportunity for involvement or knowledge sharing. Behind the conventional conference format lurks the transfer model of learning, which finds little support amongst serious students of learning. The professional conference as a forum for knowledge sharing is in dire need of a new learning theory and a more enlightened practice. The notion of human flourishing is offered as a basis for theory, and four simple design principles for the so-called "learning conference" are proposed: People go to conferences to 1. get concise input, 2. interpret it in the light of their ongoing concerns, 3. talk about their current projects and 4. meet the other attendees and be inspired by them. Six practical techniques that induce attendees to do these things...

  20. Statewide Professional Development Conference

    Directory of Open Access Journals (Sweden)

    Paul V. Bredeson

    2000-02-01

    Full Text Available In an environment increasingly skeptical of the effectiveness of large-scale professional development activities, this study examines K-12 educators' reasons for participating in, and beliefs about the utility of, a large-scale professional development conference. Pre- and post-conference surveys revealed that while financial support played a significant role in educators' ability to participate, they were drawn to the conference by the promise of learning about substantive issues related to, in this case, performance assessment—what it means, how to implement it, and how to address community concerns. In spite of the conference's utility as a means to increase awareness of critical issues and to facilitate formal and informal learning, well-conceived linkages to transfer new knowledge to the school and classroom were lacking.