WorldWideScience

Sample records for advanced supercomputing nas

  1. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  2. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues in architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environments, parallel algorithms, and performance enhancement methods are examined, and the most promising approaches are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupercomputers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. To achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, the mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, the authors assess the potential of optical and neural technologies for developing future supercomputers.

  3. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  4. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  5. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores that comprise a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
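
    A minimal sketch of the hybrid pattern the paper describes (not the authors' actual SP or BT code; the workload and names are illustrative): OpenMP shares a loop among the cores of a node while MPI combines results across nodes.

        /* Hybrid MPI/OpenMP sketch: MPI between nodes, OpenMP within a node.
         * Build with an MPI compiler wrapper, e.g. mpicc -fopenmp (assumed). */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank, nranks;
            /* Request thread support compatible with OpenMP regions. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            const int n = 1000000;
            double local = 0.0;

            /* Within a node, OpenMP threads share this rank's slice of work. */
            #pragma omp parallel for reduction(+:local)
            for (int i = rank; i < n; i += nranks)
                local += 1.0 / (1.0 + (double)i);

            /* Between nodes, MPI reduces the per-rank partial sums. */
            double global = 0.0;
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("sum = %f (%d ranks, up to %d threads each)\n",
                       global, nranks, omp_get_max_threads());
            MPI_Finalize();
            return 0;
        }

    Tuning the ranks-per-node versus threads-per-rank split is exactly the trade-off such papers measure when comparing hybrid codes against their pure-MPI counterparts.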

  6. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores that comprise a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  7. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  8. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecasting, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described.

  9. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  10. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems ETA 10 is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  11. What is supercomputing?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term "supercomputing" have come into common use over the past ten years. The performances of the main computers installed so far at the Japan Atomic Energy Research Institute are compared. There are two ways to increase computing speed using existing circuit elements: parallel processor systems and vector processor systems. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its applications divide into speeding up existing simulation calculations and accelerating the development of new atomic energy technology. Examples of supercomputing at the Japan Atomic Energy Research Institute are reported. (K.I.)

  12. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090-600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  13. Comparison of 250 MHz R10K Origin 2000 and 400 MHz Origin 2000 Using NAS Parallel Benchmarks

    Science.gov (United States)

    Turney, Raymond D.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    This report describes results of benchmark tests on Steger, a 250 MHz Origin 2000 system with R10K processors, currently installed at the NASA Advanced Supercomputing (NAS) facility at NASA Ames. For comparison purposes, the tests were also run on Lomax, a 400 MHz Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to measure system performance. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since NAS runs both message-passing (MPI) codes and shared-memory, compiler-directive codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  14. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  15. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  16. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  17. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  18. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and the man-machine interface. (orig.)

  19. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic risk assessment, methods in transport theory, advances in Monte Carlo techniques, and the man-machine interface. (orig.)

  20. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included an Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11.] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75.] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76.] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77.] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by

  1. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research that use supercomputers frequently and the contents of their computations are outlined. Vectorization is explained simply, and the discussion covers nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the parallelism inherent in atomic energy computations such as fluid dynamics, algorithms for vector processing, and the speedups achieved by vectorization. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation changed from criticality computation around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors and reactor safety assessment at present. The method of using computers also advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, from experimental analysis to numerical simulation, and so on. (K.I.)

  2. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion energy (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained.

  3. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
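
    A minimal sketch of the main-wrap idea under stated assumptions (the Swift side and all names here are illustrative, not the paper's actual interfaces): the ordinary application's main is renamed to a plain function so a wrapper can invoke it as a callable task rather than a separate executable.

        #include <stdio.h>

        /* Formerly "int main(int argc, char **argv)" of the ordinary code;
         * renaming it turns the whole application into a callable function. */
        static int app_main(int argc, char **argv)
        {
            printf("payload ran with input %s\n", argc > 1 ? argv[1] : "(none)");
            return 0;
        }

        /* Stand-in for the high-performance Swift wrapper: it builds an
         * argv-style argument list and calls the application in-process. */
        int main(void)
        {
            char *task_argv[] = { "app", "input-0001.dat", NULL };
            return app_main(2, task_argv);
        }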

  4. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines that first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies, intent either on competing directly in the supercomputer arena or on providing entry-level systems from which to graduate to supercomputers, are springing up everywhere. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that from both the hardware and software points of view they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  5. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
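
    For illustration only (this is not the patent's logic, and the torus extents below are assumptions): on a five-dimensional torus each node has two neighbors per dimension, with coordinates wrapping around at the edges, which is what the modular arithmetic in this sketch computes.

        #include <stdio.h>

        #define NDIM 5

        /* Print the +/- neighbor coordinate along each torus dimension. */
        static void torus_neighbors(const int coord[NDIM], const int size[NDIM])
        {
            for (int d = 0; d < NDIM; d++) {
                int plus  = (coord[d] + 1) % size[d];            /* wrap forward  */
                int minus = (coord[d] - 1 + size[d]) % size[d];  /* wrap backward */
                printf("dim %d: neighbors at %d and %d\n", d, plus, minus);
            }
        }

        int main(void)
        {
            int size[NDIM]  = { 4, 4, 4, 4, 2 };   /* assumed extents */
            int coord[NDIM] = { 3, 0, 2, 1, 0 };   /* an example node */
            torus_neighbors(coord, size);
            return 0;
        }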

  6. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
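
    The quoted throughput can be cross-checked with simple arithmetic. A worked example in LaTeX (the 32,000-star count is an assumed stand-in for the abstract's "16% of all Kepler target stars", which it does not quantify):

        % cost per star follows from the quoted injection rate
        t_{\mathrm{star}} = \frac{2000~\text{injections}}{16~\text{injections/core-hour}}
                          = 125~\text{core-hours}
        % cores needed to finish N stars in T wall-clock hours
        N_{\mathrm{cores}} = \frac{N_{\mathrm{stars}} \, t_{\mathrm{star}}}{T}
                           = \frac{32000 \times 125}{200} = 20000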

  7. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  8. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  9. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  10. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  11. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  12. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops", or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  13. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  14. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  15. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs as of 10/20/04. It was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors, and provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors); with 88% efficiency it tops the scalar systems on the Top500 list.

  16. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  17. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor design is often an intuition-driven process in which designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so the task becomes increasingly challenging. In the context of renewed interest in reactor designs, first-realization projects are often run in parallel with advanced design work, although they depend heavily on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features with the accuracy of on-going design methods. This should be possible within reasonable simulation time and without requiring advanced computer skills at the project management scale. These tools should also easily accommodate modeling progress in each discipline over the project's lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It offers diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks, ...) and optimization techniques (genetic algorithms). Database management and visualization are also made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermal-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics is presented. The flexibility of the URANIE tool is also illustrated with several approaches to improve Pareto front quality. (author)
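
    As an illustration of one of the sampling techniques named above (LHS; this is a generic sketch, not URANIE's API): Latin hypercube sampling splits each parameter's range into N equal strata and draws exactly one point per stratum, with the strata randomly permuted in each dimension.

        #include <stdio.h>
        #include <stdlib.h>

        /* Fill samples[n][dim] with a Latin hypercube design on [0,1)^dim. */
        static void lhs(double *samples, int n, int dim)
        {
            int *perm = malloc(n * sizeof *perm);
            for (int d = 0; d < dim; d++) {
                for (int i = 0; i < n; i++) perm[i] = i;
                for (int i = n - 1; i > 0; i--) {      /* Fisher-Yates shuffle */
                    int j = rand() % (i + 1);
                    int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
                }
                for (int i = 0; i < n; i++)            /* one draw per stratum */
                    samples[i * dim + d] =
                        (perm[i] + (double)rand() / RAND_MAX) / n;
            }
            free(perm);
        }

        int main(void)
        {
            enum { N = 5, DIM = 2 };
            double s[N * DIM];
            srand(12345);                              /* reproducible demo */
            lhs(s, N, DIM);
            for (int i = 0; i < N; i++)
                printf("%.3f %.3f\n", s[i * DIM], s[i * DIM + 1]);
            return 0;
        }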

  18. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  19. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on vector processing. Over the past six years the authors investigated the adaptability to vector computation of about 40 typical atomic energy codes. Based on the results of that investigation, the adaptability of supercomputers' vector processing capability to atomic energy codes, problems regarding its utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithms and program structures used in the codes. The speedup obtained with pipeline vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
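
    The dependence on algorithm structure can be made concrete with two loops (an illustrative example, not taken from the surveyed codes): the first has independent iterations and maps directly onto pipeline vector hardware, while the second carries a recurrence that forces scalar, in-order execution unless the algorithm is reformulated.

        #include <stdio.h>

        #define N 1024
        static double a[N], b[N], c[N];

        /* Independent iterations: a vectorizing compiler can issue this
         * element-wise work to the vector pipeline. */
        static void vectorizable(void)
        {
            for (int i = 0; i < N; i++)
                a[i] = b[i] * c[i] + 2.0;
        }

        /* a[i] depends on a[i-1]: iterations must run in order, so the
         * loop stays scalar as written. */
        static void recurrence(void)
        {
            for (int i = 1; i < N; i++)
                a[i] = a[i - 1] * b[i] + c[i];
        }

        int main(void)
        {
            for (int i = 0; i < N; i++) { b[i] = 1.0; c[i] = 0.5; }
            vectorizable();
            recurrence();
            printf("a[N-1] = %f\n", a[N - 1]);
            return 0;
        }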

  20. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane waves across the processors of the hypercube, many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFTs and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms.
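
    A minimal sketch of the distribution idea (illustrative names and workload, not the authors' code): each MPI rank owns a contiguous block of plane-wave coefficients, accumulates its partial contribution locally, and a collective reduction combines the per-rank results.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, nranks;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            const int npw = 100000;                  /* plane waves (assumed) */
            int chunk = (npw + nranks - 1) / nranks; /* block size per rank   */
            int lo = rank * chunk;
            int hi = (lo + chunk < npw) ? lo + chunk : npw;

            /* Sum only over the plane waves this rank owns. */
            double local = 0.0;
            for (int g = lo; g < hi; g++)
                local += (double)g * g;              /* stand-in for |G|^2 terms */

            double total = 0.0;
            MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
            if (rank == 0)
                printf("total over %d plane waves = %e\n", npw, total);
            MPI_Finalize();
            return 0;
        }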

  1. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  2. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  3. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  4. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  5. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  6. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
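
    Since the paper provisions VMs on compute nodes through libvirt and QEMU/KVM, a minimal sketch of starting a domain via libvirt's C API is shown below. The connection URI and the skeletal XML are placeholders (a bootable definition also needs disk and device elements), and the paper's actual Cray provisioning machinery is not reproduced here.

        /* Build with: cc vm.c -lvirt (assumed; requires libvirt headers). */
        #include <stdio.h>
        #include <libvirt/libvirt.h>

        int main(void)
        {
            /* Connect to the local QEMU/KVM hypervisor. */
            virConnectPtr conn = virConnectOpen("qemu:///system");
            if (!conn) { fprintf(stderr, "connect failed\n"); return 1; }

            /* Placeholder domain definition; real ones carry disks, NICs, etc. */
            const char *xml =
                "<domain type='kvm'>"
                "  <name>vc-node0</name>"
                "  <memory unit='MiB'>1024</memory><vcpu>1</vcpu>"
                "  <os><type arch='x86_64'>hvm</type></os>"
                "</domain>";

            /* Create and start a transient domain from the XML description. */
            virDomainPtr dom = virDomainCreateXML(conn, xml, 0);
            if (!dom) {
                fprintf(stderr, "domain creation failed\n");
                virConnectClose(conn);
                return 1;
            }
            printf("started domain: %s\n", virDomainGetName(dom));

            virDomainFree(dom);
            virConnectClose(conn);
            return 0;
        }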

  7. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among numerical experimentalists working on supercomputing techniques. The various subjects of the presented papers, covering hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, these numerical calculations have now become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  8. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, slows scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify typical classes of programs, to explore the structure of the supercomputer job flow, and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs, that is, jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow, are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.

  9. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using the metric. It is demonstrated that, in addition to computing performance and power consumption, the new supercomputer i...
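
    For reference, Amdahl's law expresses the bound alluded to here: if a fraction s of a program is inherently sequential, the speedup on P processors is

        S(P) = \frac{1}{s + \dfrac{1 - s}{P}}, \qquad \lim_{P \to \infty} S(P) = \frac{1}{s}

    so even s = 0.001 caps the speedup at 1000 no matter how many processors a machine like the Sunway TaihuLight provides, which is why minimizing the effective sequential fraction matters at this scale.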

  10. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
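
    A minimal sketch of the light-weight MPI wrapper idea (illustrative only; PanDA's actual pilot code and the payload command are not shown): each MPI rank launches one single-threaded payload with rank-specific arguments, so a single batch allocation of many cores runs many independent workloads side by side.

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            int rank;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Hypothetical payload: one serial job per rank, distinguished
             * by a rank-derived seed and output file. */
            char cmd[256];
            snprintf(cmd, sizeof cmd,
                     "./simulate --seed %d --out out_%04d.dat", rank, rank);
            int status = system(cmd);
            printf("rank %d: payload exited with status %d\n", rank, status);

            /* Hold the allocation until every payload has finished. */
            MPI_Barrier(MPI_COMM_WORLD);
            MPI_Finalize();
            return 0;
        }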

  11. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved by a supercomputer center serving a geographically distributed user base coupled via high-speed communication networks.

  12. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  13. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for that of some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition, but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market, in the $500,000 to $1 million range, offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  14. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  15. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  16. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
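
    A toy version of the graph model, with networkx as an assumed stand-in (Octotron's actual model format is not shown in the record): discovered components become vertices and detected links become edges, after which the topology can be queried.

        import networkx as nx

        g = nx.Graph()
        # Hypothetical discovery output: (component, neighbor) pairs found via Ethernet probing
        links = [("node-001", "switch-a"), ("node-002", "switch-a"), ("switch-a", "switch-core")]
        for component, neighbor in links:
            g.add_edge(component, neighbor)

        # The model can then be queried, e.g. for everything reachable from one switch
        print(sorted(nx.node_connected_component(g, "switch-a")))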

  17. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  18. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department, Barcelona Supercomputing Center, Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center, Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  19. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  20. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  1. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization: running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
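
    A generic sketch of the in-situ pattern the milestone describes (the update rule and summary fields are stand-ins, not Sandia's actual stack): an analysis routine runs in tandem with the solver each timestep, so only small summaries, rather than every raw field, need to reach disk.

        import numpy as np

        def analyze(step, field):
            # In-situ reduction: keep a tiny summary instead of the full field
            return {"step": step, "mean": float(field.mean()), "max": float(field.max())}

        summaries = []
        field = np.random.rand(64, 64)                   # stand-in for solver state
        for step in range(100):
            field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                            np.roll(field, 1, 1) + np.roll(field, -1, 1))  # fake solver update
            summaries.append(analyze(step, field))       # analysis coupled to the solver loop

        print(summaries[-1])   # only summaries, not raw timesteps, would be written out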

  2. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. Future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful, as they join together the resources of hundreds of thousands of CPUs. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  3. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization allows one to solve JINR data analysis problems for large spectrometers (in particular, for the DELPHI collaboration). The suggested modular supercomputer is based on a commercially available 32-bit microprocessor with a processing rate of about 1 MFLOPS. The processors are combined by means of VME standard buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II computer periphery. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, and all JINR users get access to the suggested system

  4. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  5. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed

  6. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  7. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  8. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  9. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  10. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  11. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important for training nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real time or faster on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)
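
    The record shows no code, but the core of real-time synchronization can be sketched as deadline pacing (a hedged illustration; the actual program's architecture is surely richer): each simulated step is pinned to the wall clock, so one second of plant time takes one second of real time as long as the step computes fast enough.

        import time

        DT = 0.1   # simulated seconds per step; also the wall-clock budget per step

        def advance_plant(state, dt):
            # Stand-in for the plant physics step; must finish well within dt
            state["t"] += dt
            return state

        state = {"t": 0.0}
        next_deadline = time.monotonic()
        for _ in range(50):
            state = advance_plant(state, DT)
            next_deadline += DT
            slack = next_deadline - time.monotonic()
            if slack > 0:
                time.sleep(slack)     # faster than real time: wait out the remainder
            # slack < 0 would mean the step overran its real-time budget

        print(f"simulated {state['t']:.1f}s of plant time")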

  12. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  13. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable source of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
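
    A toy version of the syntactic-structure idea (the authors' actual algorithm is online and more sophisticated; the regexes here are illustrative): masking variable tokens such as numbers collapses messages into templates, which then act as message groups.

        import re
        from collections import Counter

        def template(msg):
            # Replace variable fields (hex ids, then plain numbers) with placeholders
            msg = re.sub(r"0x[0-9a-f]+", "<HEX>", msg)
            return re.sub(r"\d+", "<NUM>", msg)

        logs = [
            "node 17 link error on port 3",
            "node 42 link error on port 1",
            "checkpoint written in 812 ms",
        ]
        groups = Counter(template(m) for m in logs)
        for tpl, count in groups.items():
            print(count, tpl)
        # 'node <NUM> link error on port <NUM>' groups the two link-error messages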

  14. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur

  15. Numerical aerodynamic simulation (NAS)

    International Nuclear Information System (INIS)

    Peterson, V.L.; Ballhaus, W.F. Jr.; Bailey, F.R.

    1984-01-01

    The Numerical Aerodynamic Simulation (NAS) Program is designed to provide a leading-edge computational capability to the aerospace community. It was recognized early in the program that, in addition to more advanced computers, the entire computational process ranging from problem formulation to publication of results needed to be improved to realize the full impact of computational aerodynamics. Therefore, the NAS Program has been structured to focus on the development of a complete system that can be upgraded periodically with minimum impact on the user and on the inventory of applications software. The implementation phase of the program is now under way. It is based upon nearly 8 yr of study and should culminate in an initial operational capability before 1986. The objective of this paper is fivefold: 1) to discuss the factors motivating the NAS program, 2) to provide a history of the activity, 3) to describe each of the elements of the processing-system network, 4) to outline the proposed allocation of time to users of the facility, and 5) to describe some of the candidate problems being considered for the first benchmark codes

  16. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  17. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  18. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  19. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    On research and development in various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reducing visualization processing time as well as using the JAEA network efficiently has become necessary. As a solution, we introduced a remote visualization system which is able to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring data of the intermediate visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)

  20. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  1. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  2. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing a velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in the processing of regional and global seismic data, in seismic exploration for prospecting and exploring mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high-performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in the processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
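
    In the scheme the abstract outlines, first-arrival travel times are integrals of slowness along ray paths, and the update step solves a regularized linear system. A standard formulation (the paper's exact regularization is not stated; Tikhonov smoothing is shown as a common choice) is

        t_i = \int_{\text{ray}_i} s \, dl, \qquad \min_{\delta s} \left\| A\,\delta s - \delta t \right\|_2^2 + \lambda \left\| L\,\delta s \right\|_2^2,

    where A is the tomographic matrix of ray lengths per model cell, \delta t the vector of travel-time residuals, and L a smoothing operator.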

  3. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room was converted into a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  4. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  5. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  6. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several of the state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we would need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that uses the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  7. KfK-seminar series on supercomputing und visualization from May till September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period from May 1992 to September 1992 a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de

  8. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  9. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created a basis for the development of a new research area, Economics of Quality. Its tools allow the use of model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing the functioning of complex socio-economic systems. It is our firm belief that the extensive application and development of models, as well as system modeling with the use of supercomputer technologies, will bring the study of socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less importantly, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the increase in computing power, it has become possible to describe the behavior of many separate fragments of a complex system, as socio-economic systems are. The article also deals with the experience of foreign scientists and practitioners in running AFM on supercomputers, and analyzes the example of an AFM developed at CEMI RAS, including the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation for forecasting the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  10. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  11. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation

  12. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massive parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered in the area of the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  13. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
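
    The general shape of such a model (details beyond the abstract are assumed here) decomposes predicted runtime per process into compute, memory-contention, and communication terms,

        T \approx T_{\text{comp}} + T_{\text{mem}} + T_{\text{comm}}(p),

    where T_{\text{mem}} is calibrated from measured sustained memory bandwidth under contention (as with the STREAM benchmarks) and T_{\text{comm}}(p) is the parameterized communication model fitted from MPI benchmarks.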

  14. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  15. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  16. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with the Navier-Stokes computations widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations; CFD, aeroelastic, and controls coupling for flutter suppression and active control; and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment encompassing many technologies. 40 refs

  17. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
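
    The locality and uniformity that make cellular automata so cheap to realize in hardware are visible even in a serial sketch. The rule below is a generic majority-vote automaton, not the lattice-gas rules used for fluid modeling:

        import numpy as np

        rng = np.random.default_rng(0)
        state = (rng.random((128, 128)) > 0.5).astype(np.uint8)
        for _ in range(10):
            # Every cell sees only its four von Neumann neighbours (locality) ...
            neighbours = (np.roll(state, 1, 0) + np.roll(state, -1, 0) +
                          np.roll(state, 1, 1) + np.roll(state, -1, 1))
            # ... and applies the same rule everywhere (uniformity): majority of 5 wins
            state = ((state + neighbours) >= 3).astype(np.uint8)

        print(int(state.sum()), "cells set after 10 steps")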

  18. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128^3-grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation, with 1024^3 grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  19. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
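
    A toy rendition of that pipeline, assuming PyWavelets and SciPy as modern stand-ins (the paper predates both, and the vector dimension and codebook size below are arbitrary rather than rate-distortion optimized):

        import numpy as np
        import pywt                                      # PyWavelets
        from scipy.cluster.vq import kmeans, vq

        signal = np.cumsum(np.random.randn(4096))        # stand-in for a smooth model field
        coeffs = pywt.wavedec(signal, "db2", level=3)    # wavelet subband decomposition

        DIM, K = 4, 16                                   # arbitrary vector size / codebook size
        compressed = []
        for band in coeffs:
            vecs = band[: len(band) // DIM * DIM].reshape(-1, DIM)
            codebook, _ = kmeans(vecs, K)                # train a per-subband codebook
            codes, _ = vq(vecs, codebook)                # store small integer indices
            compressed.append((codebook, codes))
        # Each subband now costs K code vectors plus one index per DIM coefficients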

  20. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the
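
    The "light-weight MPI wrapper" idea mentioned above can be sketched in a few lines with mpi4py: each MPI rank launches one independent single-threaded payload, so a single batch job fills the cores of a multi-core node. This is a minimal sketch, not the production pilot; the payload command and per-rank input naming are hypothetical.

```python
from mpi4py import MPI      # assumes mpi4py is installed on the facility
import subprocess
import sys

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank launches one independent, single-threaded payload. The command
# below is a trivial stand-in for a real experiment payload; the per-rank
# input file name is hypothetical.
cmd = [sys.executable, "-c", f"print('processing events_{rank:04d}.dat')"]
ret = subprocess.call(cmd)

# Rank 0 aggregates the exit codes so the batch system sees a single status.
codes = comm.gather(ret, root=0)
if rank == 0:
    sys.exit(max(codes))
```

    Launched, for example, as `mpiexec -n 16 python wrapper.py`, one batch allocation runs 16 serial payloads side by side.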

  1. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  2. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
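
    At its core, cross-correlating failures with executing jobs is an interval-containment join: a failure "hits" every job whose execution window contains its timestamp. The toy pandas sketch below illustrates the step; the failure classes and job records are invented stand-ins for Titan's logs.

```python
import pandas as pd

# Hypothetical stand-ins for a failure log and a job-submission log.
failures = pd.DataFrame({
    "time": pd.to_datetime(["2015-03-01 10:00", "2015-03-02 02:30"]),
    "component": ["GPU_XID", "LUSTRE"],
})
jobs = pd.DataFrame({
    "job_id": [101, 102, 103],
    "start": pd.to_datetime(["2015-03-01 09:00", "2015-03-01 09:30", "2015-03-02 01:00"]),
    "end":   pd.to_datetime(["2015-03-01 11:00", "2015-03-01 09:45", "2015-03-02 05:00"]),
})

# A failure affects every job running when it occurred.
hits = []
for f in failures.itertuples():
    running = jobs[(jobs.start <= f.time) & (jobs.end >= f.time)]
    hits += [(f.time, f.component, j) for j in running.job_id]

impact = pd.DataFrame(hits, columns=["time", "component", "job_id"])
print(impact)                                  # one row per (failure, affected job)
print(impact.groupby("component").job_id.nunique())
```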

  3. Advanced computers and simulation

    International Nuclear Information System (INIS)

    Ryne, R.D.

    1993-01-01

    Accelerator physicists today have access to computers that are far more powerful than those available just 10 years ago. In the early 1980s, desktop workstations performed fewer than one million floating point operations per second (Mflops), and the realized performance of vector supercomputers was at best a few hundred Mflops. Today vector processing is available on the desktop, providing researchers with performance approaching 100 Mflops at a price that is measured in thousands of dollars. Furthermore, advances in Massively Parallel Processors (MPPs) have made performance of over 10 gigaflops a reality, and around mid-decade MPPs are expected to be capable of teraflops performance. Along with advances in MPP hardware, researchers have also made significant progress in developing algorithms and software for MPPs. These changes have had, and will continue to have, a significant impact on the work of computational accelerator physicists. Now, instead of running particle simulations with just a few thousand particles, we can perform desktop simulations with tens of thousands of simulation particles, and calculations with well over 1 million particles are being performed on MPPs. In the area of computational electromagnetics, simulations that used to be performed only on vector supercomputers now run in several hours on desktop workstations, and researchers are hoping to perform simulations with over one billion mesh points on future MPPs. In this paper we will discuss the latest advances, and what can be expected in the near future, in hardware, software and applications codes for advanced simulation of particle accelerators

  4. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  5. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  6. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnection is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. It is needless to say that highly tuned software targeting new architectures such as many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen_s) and the enhanced-performance version (eigen_sx), on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), achieving 3.0 TFLOPS on a matrix of dimension two hundred thousand. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY-representation. Even though this version is still at a test stage, it already reaches 4.7 TFLOPS on the same matrix, surpassing eigen_s. (author)
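
    The conventional three-stage algorithm named above (Householder tridiagonalization, a tridiagonal eigensolver, back-transformation) can be mimicked in serial form with SciPy routines, as in the sketch below. These serial calls are only stand-ins for the distributed kernels in eigen_s, and the correspondence of SciPy's tridiagonal driver to the DC stage is an assumption noted in the comments.

```python
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(0)
A = rng.random((300, 300))
A = (A + A.T) / 2                         # symmetric test matrix

# Stage 1: Householder reduction; for symmetric A the Hessenberg form is
# tridiagonal (up to rounding), so we keep only its two diagonals.
T, Q = hessenberg(A, calc_q=True)

# Stage 2: tridiagonal eigensolver. Note: eigh_tridiagonal's default LAPACK
# driver is not guaranteed to be the divide-and-conquer routine; treating it
# as the DC stage is an approximation for illustration.
w, V = eigh_tridiagonal(np.diag(T), np.diag(T, 1))

# Stage 3: back-transformation to eigenvectors of the original matrix.
X = Q @ V
print("residual ok:", np.allclose(A @ X, X * w, atol=1e-6))
```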

  7. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  8. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  9. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  10. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  11. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
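
    The combination of properties listed (unconditionally stable, unitary, spatially local, explicit) is characteristic of checkerboard-type splittings, in which the hopping part of a discretized Hamiltonian is divided into disjoint nearest-neighbour pairs and each 2x2 block is exponentiated exactly. Whether this matches the paper's scheme in detail is an assumption; the 1-D sketch below, with illustrative parameters, simply demonstrates why such updates preserve the norm exactly for any time step.

```python
import numpy as np

N, dt, c = 512, 0.05, 1.0                      # sites, time step, hopping strength
x = np.arange(N)
psi = np.exp(-((x - 128.0) / 10.0) ** 2 / 2) * np.exp(0.5j * x)
psi /= np.linalg.norm(psi)                     # wave packet moving to the right
V = np.where(np.abs(x - 300) < 5, 1.5, 0.0)    # square barrier to scatter off

cosb, sinb = np.cos(c * dt), np.sin(c * dt)

def pair_update(p, start):
    # Apply the exact 2x2 unitary exp(i*c*dt*sigma_x) to each disjoint
    # neighbour pair (start, start+1), (start+2, start+3), ...
    npair = (N - start) // 2
    a = p[start::2][:npair].copy()
    b = p[start + 1::2][:npair].copy()
    p[start::2][:npair] = cosb * a + 1j * sinb * b
    p[start + 1::2][:npair] = cosb * b + 1j * sinb * a

for _ in range(200):
    psi *= np.exp(-1j * V * dt)                # local potential phase
    pair_update(psi, 0)                        # "even" bonds
    pair_update(psi, 1)                        # "odd" bonds

print("norm after 200 steps:", np.linalg.norm(psi))   # stays 1 to rounding
```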

  12. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  13. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  14. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems

  15. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
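
    The sorted k-mer lists mentioned above serve as seeds for anchoring the alignment. The toy sketch below shows the idea on two tiny stand-in sequences; real progressiveMauve seeds are inexact and far more elaborate.

```python
from collections import defaultdict

def kmer_list(seq, k=8):
    # Sorted list of (k-mer, position) pairs for one genome.
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

# Two tiny stand-in "genomes"; real inputs are megabase-scale.
genomes = {"g1": "ACGTACGTGGTACCAGT", "g2": "TTACGTACGTGGTCCAG"}

index = defaultdict(list)
for name, seq in genomes.items():
    for kmer, pos in kmer_list(seq):
        index[kmer].append((name, pos))

# k-mers shared by more than one genome become candidate alignment anchors.
anchors = {k: v for k, v in index.items() if len({n for n, _ in v}) > 1}
print(anchors)
```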

  16. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
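
    As an aside, the computational core here, massive pairwise cross-correlation of noise records, can be sketched with FFT-based correlation as below. The synthetic signals and the 3-second shift are illustrative stand-ins for real station data; in production this kernel runs over all station pairs and many time windows, which is what motivates the GPUs.

```python
import numpy as np

fs, n = 100, 100_000                       # sample rate (Hz), samples per window
rng = np.random.default_rng(0)
src = rng.standard_normal(n + 500)
# The same noise wavefield arrives at station B 3 s (300 samples) before A.
rec_a, rec_b = src[:n], src[300:300 + n]

# Circular cross-correlation via the FFT correlation theorem.
fa, fb = np.fft.rfft(rec_a), np.fft.rfft(rec_b)
xcorr = np.fft.irfft(fa * np.conj(fb))

lag = int(np.argmax(xcorr))
lag = lag - n if lag > n // 2 else lag     # map to a signed lag
print("station A lags B by", lag / fs, "s")   # expect 3.0
```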

  17. Simultaneous Estimation of Hydrochlorothiazide, Hydralazine Hydrochloride, and Reserpine Using PCA, NAS, and NAS-PCA.

    Science.gov (United States)

    Sharma, Chetan; Badyal, Pragya Nand; Rawal, Ravindra K

    2015-01-01

    In this study, new and feasible UV-visible spectrophotometric and multivariate spectrophotometric methods were described for the simultaneous determination of hydrochlorothiazide (HCTZ), hydralazine hydrochloride (H.HCl), and reserpine (RES) in combined pharmaceutical tablets. Methanol was used as a solvent for analysis and the whole UV region was scanned from 200-400 nm. The resolution was obtained by using multivariate methods such as the net analyte signal method (NAS), principal component analysis (PCA), and net analyte signal-principal component analysis (NAS-PCA) applied to the UV spectra of the mixture. The results obtained from the three methods were compared. NAS-PCA resolved considerably more of the overlapping data than NAS or PCA alone. Thus, the NAS-PCA technique, a combination of the NAS and PCA methods, is advantageous for extracting information from overlapping spectra.

  18. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  19. Advances in software science and technology

    CERN Document Server

    Hikita, Teruo; Kakuda, Hiroyasu

    1993-01-01

    Advances in Software Science and Technology, Volume 4 provides information pertinent to the advancement of the science and technology of computer software. This book discusses the various applications for computer systems.Organized into two parts encompassing 10 chapters, this volume begins with an overview of the historical survey of programming languages for vector/parallel computers in Japan and describes compiling methods for supercomputers in Japan. This text then explains the model of a Japanese software factory, which is presented by the logical configuration that has been satisfied by

  20. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  1. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  2. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software was designed for the automatic investigation and solution of computational mathematics problems with approximately given data of different structures. Applied software for mathematical modeling problems in construction, welding, and filtration processes was implemented.

  3. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  4. Lipoproteins: metabolism and atherogenic lipoproteins

    OpenAIRE

    Carlos Carvajal

    2014-01-01

    Lipids travel in the blood in different particles containing lipids and proteins, called lipoproteins. There are four classes of lipoproteins in blood: chylomicrons, VLDL, LDL, and HDL. Chylomicrons transport triglycerides (TAG) to vital tissues (heart, skeletal muscle, and adipose tissue). The liver secretes VLDL, which redistributes TAG to adipose tissue, heart, and skeletal muscle. LDL transports cholesterol to the cells, and HDL removes cholesterol from the cells back to the liver...

  5. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB NAND Flash parallel disk array (the Fusion-io). The Fusion system specs are as follows

  6. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five dimensional torus network that optimally maximize the throughput of packet communications between nodes and minimize latency. The network implements collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design include a list-based prefetcher. The memory system implements transaction memory, thread level speculation, and multiversioning cache that improves soft error rate at the same time and supports DMA functionality allowing for parallel processing message-passing.

  7. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite-difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data-transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver permits previously unresolvable focused fluid flow to be explained as a natural outcome of the porosity setup. In both cases
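
    As a side note, the iterative finite-difference scheme with damped residuals can be illustrated on a stand-in Poisson problem (the actual solvers treat glacier Stokes flow and poromechanics). Each iteration is one local stencil update, which is why the method maps well to GPUs and to purely point-to-point MPI; grid size, forcing, and tolerance below are illustrative.

```python
import numpy as np

n, tol = 64, 1e-6
h = 1.0 / (n - 1)
p = np.zeros((n, n))
rhs = np.ones((n, n))                     # stand-in forcing term
dtau = h * h / 4.1                        # pseudo-time step (explicit stability)

for it in range(200_000):
    # Five-point Laplacian stencil; np.roll wraps, but boundary residuals
    # are zeroed below, so the walls act as Dirichlet (p = 0) boundaries.
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p) / h**2
    res = lap - rhs                       # residual of  laplace(p) = rhs
    res[0, :] = res[-1, :] = res[:, 0] = res[:, -1] = 0.0
    p += dtau * res                       # damped explicit (pseudo-transient) update
    if np.abs(res).max() < tol:
        break

print(f"converged in {it} iterations, max|res| = {np.abs(res).max():.1e}")
```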

  8. UAS-NAS Stakeholder Feedback Report

    Science.gov (United States)

    Randall, Debra; Murphy, Jim; Grindle, Laurie

    2016-01-01

    The need to fly UAS in the NAS to perform missions of vital importance to national security and defense, emergency management, science, and to enable commercial applications has been continually increasing over the past few years. To address this need, the NASA Aeronautics Research Mission Directorate (ARMD) Integrated Aviation Systems Program (IASP) formulated and funded the Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project (hereafter referred to as UAS-NAS Project) from 2011 to 2016. The UAS-NAS Project identified the following need statement: The UAS community needs routine access to the global airspace for all classes of UAS. The Project identified the following goal: To provide research findings to reduce technical barriers associated with integrating UAS into the NAS utilizing integrated system level tests in a relevant environment. This report provides a summary of the collaborations between the UAS-NAS Project and its primary stakeholders and how the Project applied and incorporated the feedback.

  9. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small-scale system, given the improved performance of a PC's CPU. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, once used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer of 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU due to the difficult programming techniques needed to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.

  10. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small-scale system, given the improved performance of a PC's CPU. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, once used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer of 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU due to the difficult programming techniques needed to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.

  11. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed

  12. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density function theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  13. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  14. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused...

  15. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  16. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  17. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
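
    A simple size-aware load-balancing strategy in the spirit of the one described above (the paper's actual policy may differ) is longest-task-first assignment to the least-loaded worker, sketched below with invented document lengths.

```python
import heapq

def balance(doc_lengths, n_workers):
    # Min-heap of (current load, worker id, assigned doc indices).
    heap = [(0, w, []) for w in range(n_workers)]
    heapq.heapify(heap)
    # Assign the longest documents first to the least-loaded worker.
    for i, size in sorted(enumerate(doc_lengths), key=lambda t: -t[1]):
        load, w, docs = heapq.heappop(heap)
        docs.append(i)
        heapq.heappush(heap, (load + size, w, docs))
    return sorted(heap, key=lambda t: t[1])

# Hypothetical document lengths (e.g. in kilobytes) spread over 3 workers.
for load, w, docs in balance([40, 3, 33, 25, 8, 21, 17], 3):
    print(f"worker {w}: docs {docs} (load {load})")
```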

  18. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented, followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented; they reveal the details of the losses for a single operation. We then analyze the global performance of a whole supercomputer by identifying reduction factors that bring the theoretical peak performance down to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures in a snapshot of January 1991 is then briefly mentioned. Finally, some remarks are made on a user-friendly architecture for a supercomputer. (orig.)
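
    The vector triad mentioned above is the kernel a(i) = b(i) + c(i)*d(i). A quick NumPy timing sketch of it, as below, already exposes the memory-bandwidth effects the paper dissects: the measured rate drops once the vectors no longer fit in cache. Absolute numbers are machine-dependent.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
for n in (10**3, 10**5, 10**7):
    b, c, d = (rng.random(n) for _ in range(3))
    a = np.empty(n)
    reps = max(1, 10**7 // n)              # keep total work roughly constant
    t0 = time.perf_counter()
    for _ in range(reps):
        np.add(b, c * d, out=a)            # vector triad: 2 flops per element
    dt = time.perf_counter() - t0
    print(f"n={n:>9}: {2 * n * reps / dt / 1e6:8.1f} MFLOPS")
```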

  19. Computational chemistry research

    Science.gov (United States)

    Levin, Eugene

    1987-01-01

    Task 41 is composed of two parts: (1) analysis and design studies related to the Numerical Aerodynamic Simulation (NAS) Extended Operating Configuration (EOC) and (2) computational chemistry. During the first half of 1987, Dr. Levin served as a member of an advanced system planning team to establish the requirements, goals, and principal technical characteristics of the NAS EOC. A paper entitled 'Scaling of Data Communications for an Advanced Supercomputer Network' is included. The high temperature transport properties (such as viscosity, thermal conductivity, etc.) of the major constituents of air (oxygen and nitrogen) were correctly determined. The results of prior ab initio computer solutions of the Schroedinger equation were combined with the best available experimental data to obtain complete interaction potentials for both neutral and ion-atom collision partners. These potentials were then used in a computer program to evaluate the collision cross-sections from which the transport properties could be determined. A paper entitled 'High Temperature Transport Properties of Air' is included.

  20. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  1. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  2. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different number of processors, for two network examples. A sample source code is given
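
    A minimal serial version of the Multi Layer Perceptron plus Back Propagation loop that such a library parallelizes might look as follows; one hidden layer trained by batch gradient descent on XOR, with illustrative layer sizes and learning rate (the SIMD port distributes exactly these matrix operations).

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)            # XOR targets
Xb = np.hstack([X, np.ones((4, 1))])                 # constant 1 = bias input
W1 = rng.normal(0, 1, (3, 8))                        # 2 inputs + bias -> 8 hidden
W2 = rng.normal(0, 1, (9, 1))                        # 8 hidden + bias -> 1 output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sig(Xb @ W1)                                 # forward pass
    hb = np.hstack([h, np.ones((4, 1))])
    out = sig(hb @ W2)
    d2 = (out - y) * out * (1 - out)                 # backward pass: output delta
    d1 = (d2 @ W2[:-1].T) * h * (1 - h)              # hidden delta
    W2 -= 0.5 * hb.T @ d2                            # gradient-descent updates
    W1 -= 0.5 * Xb.T @ d1

print(np.round(out.ravel(), 2))                      # typically ~ [0, 1, 1, 0]
```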

  3. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA - the Production and Distributed Analysis Workload Management System - has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on powerful computing resources. In this paper we describe the adaptation of the PALEOMIX pipeline to run in a distributed computing environment powered by PanDA. To run the pipeline we split input files into chunks, which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid reduced the payload execution time for mammoth DNA samples from weeks to days.
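
    The split/process/merge pattern described above can be sketched as below. File names and the chunk size are hypothetical, and for real sequencing inputs the chunk size must respect record boundaries (e.g. a multiple of the 4-line FASTQ record).

```python
from pathlib import Path

# Create a toy input file; real inputs would be large sequencing files.
src = Path("reads.txt")                               # hypothetical name
src.write_text("".join(f"line {i}\n" for i in range(10)))

def split(path, lines_per_chunk=4):
    # Write fixed-size chunks; each part becomes one job's pipeline input.
    parts, buf, k = [], [], 0
    with open(path) as fh:
        for line in fh:
            buf.append(line)
            if len(buf) == lines_per_chunk:
                parts.append(Path(f"{path}.part{k:04d}"))
                parts[-1].write_text("".join(buf))
                buf, k = [], k + 1
    if buf:
        parts.append(Path(f"{path}.part{k:04d}"))
        parts[-1].write_text("".join(buf))
    return parts

def merge(parts, out_path):
    # Run after all per-chunk jobs have finished; order by part number.
    with open(out_path, "w") as out:
        for part in sorted(parts):
            out.write(part.read_text())

merge(split(src), "reads.merged.txt")
print(Path("reads.merged.txt").read_text() == src.read_text())   # True
```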

  4. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  5. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at the integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
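
    The backfill capability can be sketched as follows. query_free_resources is a hypothetical stand-in for the pilot's query of Titan's batch system; the real implementation talks to the scheduler and is considerably more involved.

        # Sketch of backfill job shaping: fit a job to whatever is free right now.
        # query_free_resources() is a hypothetical placeholder for the scheduler query.

        def query_free_resources():
            # Hypothetical: returns (free_nodes, minutes until they are reclaimed).
            return 1200, 90

        def shape_backfill_job(max_nodes, max_minutes, safety=0.9):
            free_nodes, window = query_free_resources()
            nodes = min(max_nodes, int(free_nodes * safety))      # leave headroom
            walltime = min(max_minutes, int(window * safety))     # finish before reclaim
            return nodes, walltime

        nodes, walltime = shape_backfill_job(max_nodes=2000, max_minutes=120)
        print(f"submit MPI wrapper on {nodes} nodes, walltime {walltime} min")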

  6. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  7. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q, including a comparison with failure data from Q.

  8. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
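
    The core of an explicit finite-difference scheme of the kind the abstract describes is shown below, reduced to one dimension for clarity; the grid spacing, time step and soil diffusivity are placeholder values, not those of the paper.

        # 1-D explicit finite-difference sketch of transient heat conduction
        # (the paper solves the full electrode geometry; this is only the scheme's core)
        import numpy as np

        nx, dx, dt = 100, 0.1, 0.001
        alpha = 1e-3                    # assumed soil thermal diffusivity
        T = np.full(nx, 10.0)           # initial soil temperature, deg C
        T[0] = 60.0                     # electrode boundary held hot; far end stays at 10

        r = alpha * dt / dx**2          # explicit scheme is stable for r <= 0.5
        for step in range(10_000):
            T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])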

  9. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  10. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  11. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  12. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.

  13. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  14. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on certain problem types are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are further explained. Finally, the aims and characteristics of a European sponsored project called PAGEIN, which fits perfectly into the line of developments at RUS, are explained. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  15. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Since running large physical simulations requires powerful computers, the thesis is effectively split into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  16. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  17. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
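
    A minimal sketch of the VPIN indicator mentioned above is given below, using the simple tick rule for trade classification; the bucket volume and window length are illustrative choices rather than the calibration used in the study.

        # Sketch of the VPIN (Volume-Synchronized Probability of Informed Trading)
        # indicator; tick-rule classification, illustrative bucket size and window.
        import numpy as np

        def vpin(prices, volumes, bucket_volume, window=50):
            buy = sell = filled = 0.0
            imbalances = []
            last_price = prices[0]
            for p, v in zip(prices[1:], volumes[1:]):
                side_buy = p >= last_price      # tick rule: upticks count as buys
                last_price = p
                while v > 0:
                    take = min(v, bucket_volume - filled)
                    if side_buy:
                        buy += take
                    else:
                        sell += take
                    filled += take
                    v -= take
                    if filled >= bucket_volume:  # bucket complete: record imbalance
                        imbalances.append(abs(buy - sell) / bucket_volume)
                        buy = sell = filled = 0.0
            if len(imbalances) < window:
                return None                      # not enough buckets yet
            return float(np.mean(imbalances[-window:]))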

  18. Advancements in simulations of lattice quantum chromodynamics

    International Nuclear Information System (INIS)

    Lippert, T.

    2008-01-01

    An introduction to lattice QCD with emphasis on advanced fermion formulations and their simulation is given. In particular, overlap fermions will be presented, a quite novel fermionic discretization scheme that is able to exactly preserve chiral symmetry on the lattice. I will discuss efficiencies of state-of-the-art algorithms on highly scalable supercomputers and I will show that, due to many algorithmic improvements, overlap simulations will soon become feasible for realistic physical lattice sizes. Finally I am going to sketch the status of some current large scale lattice QCD simulations. (author)

  19. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860-microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
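
    The paper's generator is tied to the i860 instruction set and is not reproduced here; as a generic stand-in, the sketch below shows an additive lagged-Fibonacci generator, a common construction for long-period parallel RNGs (the lags 24/55 are a classic choice, not necessarily those of the Paragon code).

        # Generic additive lagged-Fibonacci generator (illustrative stand-in)
        class LaggedFibonacci:
            def __init__(self, seed=12345, j=24, k=55, m=2**32):
                self.j, self.k, self.m = j, k, m
                x = seed
                self.state = []
                for _ in range(k):              # fill the lag table with an LCG
                    x = (1103515245 * x + 12345) % m
                    self.state.append(x)

            def next_uniform(self):
                # x_n = x_{n-j} + x_{n-k} mod m
                new = (self.state[-self.j] + self.state[-self.k]) % self.m
                self.state.append(new)
                self.state.pop(0)
                return new / self.m             # uniform in [0, 1)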

  20. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    Science.gov (United States)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored, create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge

  1. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which aggregates Polish academic supercomputer centers. Selected experimental results achieved by the usage of the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting the results of predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  2. Lie. kū́nas

    Directory of Open Access Journals (Sweden)

    Simas Karaliūnas

    2011-12-01

    Full Text Available LITH. kū́nas “BODY”. Summary: The cognates of Lith. kū́nas “body” and Latv. kûnis (kũne, kũņa) “body; chrysalis; caterpillar of a butterfly; bee pupae” are supposed to be Lith. kūna “carrion”, pa-kū́nė “sore, furuncle; upper lamella, a layer under the roots”, Latv. kuna “wart, excrescence”, kunis “bottom of a sheaf” and others. Lith. kū́nas, kūna may represent substantivized forms of the adjective Latv. kûns “round, obese, stout”, while Latv. kûnis, kũņa, kūne seem to be derivatives with the suffixes *-o-, *-ā-, *-ē-.

  3. Unstructured Adaptive Meshes: Bad for Your Memory?

    Science.gov (United States)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  4. Advanced technology composite aircraft structures

    Science.gov (United States)

    Ilcewicz, Larry B.; Walker, Thomas H.

    1991-01-01

    Work performed during the 25th month on NAS1-18889, Advanced Technology Composite Aircraft Structures, is summarized. The main objective of this program is to develop an integrated technology and demonstrate a confidence level that permits the cost- and weight-effective use of advanced composite materials in primary structures of future aircraft with the emphasis on pressurized fuselages. The period from 1-31 May 1991 is covered.

  5. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    The software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte-Carlo method using the supercomputer SKIF BSU is developed. The program generates a large number of rays propagated from a source to the refractive structure. Each ray trajectory is calculated under the assumption of geometrical optics. Absorption is calculated for each ray inside the refractive structure. Dynamic arrays are used to store the calculated ray parameters, which allows the X-ray field distribution to be restored very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10^8 X-rays on the supercomputer with 1 and 30 processors takes 3 hours and 6 minutes, respectively. 10^9 X-rays were calculated by the software 'Xray-SKIF', which allows the X-ray field after the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
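
    The per-ray logic of such a Monte Carlo code can be sketched as follows; the absorption coefficient and lens profile below are toy placeholders, not the parameters of 'Xray-SKIF'.

        # Monte Carlo sketch of ray attenuation in a refractive structure
        # (material constants and geometry are placeholders)
        import numpy as np

        rng = np.random.default_rng(1)
        mu = 50.0                      # assumed linear absorption coefficient, 1/cm

        def trace_ray(y0, lens_profile, thickness=0.1):
            """Return transmitted intensity of a ray entering at height y0 (cm)."""
            path = lens_profile(y0) * thickness   # material path length along the ray
            return np.exp(-mu * path)             # Beer-Lambert absorption

        parabolic = lambda y: min(1.0, y**2 / 0.01)   # toy parabolic lens profile
        ys = rng.uniform(-0.1, 0.1, size=100_000)     # rays sampled from the source
        intensity = np.array([trace_ray(y, parabolic) for y in ys])
        print("mean transmission:", intensity.mean())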

  6. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; D'Azevedo, Eduardo [ORNL]; Philip, Bobby [ORNL]; Worley, Patrick H [ORNL]

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
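
    As a simplified illustration of topology-aware task mapping, the greedy sketch below places the most heavily communicating rank pairs on mutually close nodes; the traffic and distance matrices are assumed inputs, and the paper's actual methods (spectral bisection, neighbor-join trees, etc.) are more sophisticated.

        # Greedy illustration of topology-aware task mapping (not the paper's algorithm)
        def greedy_mapping(traffic, distance):
            """traffic[i][j]: bytes between ranks i,j; distance[a][b]: hops between nodes."""
            n = len(traffic)
            mapping = {}
            free = list(range(n))
            # process rank pairs in decreasing traffic order
            pairs = sorted(((traffic[i][j], i, j) for i in range(n) for j in range(i)),
                           reverse=True)
            for _, i, j in pairs:
                for r in (i, j):
                    if r not in mapping:
                        partner = mapping.get(j if r == i else i)
                        if partner is None:
                            best = free[0]                 # no partner placed yet
                        else:
                            # put the rank on the free node closest to its partner
                            best = min(free, key=lambda node: distance[node][partner])
                        mapping[r] = best
                        free.remove(best)
            return mapping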

  7. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem

  8. Annabela Rita, Fernando Cristóvão (eds.), Daniela Marcheschi (Prefácio). Fabricar a Inovação – O Processo Criativo em Questão nas Ciências, nas Letras e nas Artes

    Directory of Open Access Journals (Sweden)

    Luísa Marinho Antunes

    2017-06-01

    Full Text Available Review of the volume Annabela Rita and Fernando Cristóvão (eds.), Fabricar a Inovação – O Processo Criativo em Questão nas Ciências, nas Letras e nas Artes, preface by Daniela Marcheschi. Lisboa: Gradiva, 2016. Print (396 pp.), followed by the Italian version of the preface.

  9. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at the Research Center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As proof tests, the special tools have been tested on specific tasks, based on the comparison of experimental and numerical results, which are in reasonable agreement. (author)

  10. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed-memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
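
    The core k-means step that MSTC parallelizes can be sketched serially as follows; in the paper's hybrid implementation the distance computation is offloaded to GPUs and the centroid sums are reduced across MPI ranks.

        # Serial sketch of one k-means iteration (the building block of MSTC)
        import numpy as np

        def kmeans_step(points, centroids):
            # distances from every observation to every centroid
            d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # recompute centroids; in the MPI version these sums are Allreduce'd
            new_centroids = np.array([
                points[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
                for k in range(len(centroids))
            ])
            return labels, new_centroids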

  11. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  12. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan’s multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect on Titan millions of core-hours per month and execute hundreds of thousands of jobs, while simultaneously improving Titan’s utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to

  13. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience related to the implementation of MRBT, an atmospheric dispersion model for short-duration releases, is reported. This model was implemented on a QUADRICS-Q1 supercomputer. First, a description of the MRBT model is given. It is an analytical model to study the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like; it yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
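
    A Gaussian puff of the kind the abstract describes can be sketched as below; the source strength, wind speed and diffusivity are placeholder values, and MRBT's exact parameterization is not reproduced.

        # Gaussian-puff concentration sketch for an instantaneous release
        # (coefficients are placeholders, not MRBT's parameterization)
        import numpy as np

        def concentration(x, y, z, t, Q=1.0, u=2.0, K=0.5):
            """Puff released at the origin at t=0; Q source mass, u wind speed, K diffusivity."""
            sigma2 = 2.0 * K * t                      # isotropic spread grows with time
            norm = Q / ((2 * np.pi * sigma2) ** 1.5)
            r2 = (x - u * t) ** 2 + y ** 2 + z ** 2   # puff centre advected by the wind
            return norm * np.exp(-r2 / (2.0 * sigma2))

        print(concentration(x=100.0, y=0.0, z=0.0, t=50.0))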

  14. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  15. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  16. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems in the framework of the complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling data from social networks and visualization of results. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks and epidemic spreading.

  17. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seem to be able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  18. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of Complex Engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems-Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed

  19. High electron mobility in Ga(In)NAs films grown by molecular beam epitaxy

    International Nuclear Information System (INIS)

    Miyashita, Naoya; Ahsan, Nazmul; Monirul Islam, Muhammad; Okada, Yoshitaka; Inagaki, Makoto; Yamaguchi, Masafumi

    2012-01-01

    We report the highest mobility values, above 2000 cm^2/Vs, in Si-doped GaNAs films grown by molecular beam epitaxy. To understand the origin of the factors limiting electron mobility in GaNAs, the temperature dependence of mobility was measured for high-mobility GaNAs and a reference low-mobility GaInNAs sample. The temperature-dependent mobility of high-mobility GaNAs is similar to the GaAs case, while that of low-mobility GaInNAs shows a large decrease in the lower temperature region. The electron mobility of high-quality GaNAs can be explained by the intrinsic limiting factor of random alloy scattering and the extrinsic factor of ionized impurity scattering.

  20. Interacciones de las proteínas disulfuro isomerasa y de choque térmico Hsc70 con proteínas estructurales recombinantes purificadas de rotavirus

    Directory of Open Access Journals (Sweden)

    Luz Y. Moreno

    2016-01-01

    Full Text Available Introduction. Rotavirus entry into cells appears to be mediated by sequential interactions between the viral structural proteins and some cell surface molecules. However, the mechanisms by which rotavirus infects the target cell are still not well understood. There is some evidence showing that the rotavirus structural proteins VP5* and VP8* interact with some cell surface molecules. The availability of recombinant rotavirus structural proteins in sufficient quantity has become an important aspect for the identification of the specific virus-cell receptor interactions during the early events of the infectious process. Objective. The purpose of the present work is to analyze the interactions between the recombinant rotavirus structural proteins VP5*, VP8* and VP6 and the cellular proteins Hsc70 and PDI using their purified recombinant versions. Materials and methods. The recombinant rotavirus proteins VP5* and VP8* and the recombinant cellular proteins Hsc70 and PDI were expressed in E. coli BL21 (DE3, while VP6 was expressed in MA104 cells transfected with recombinant vaccinia virus. The interaction between rotavirus and cellular proteins was studied by ELISA, co-immunoprecipitation and SDS-PAGE/Western blotting. Results. The optimal conditions for the expression of the recombinant proteins were determined and antibodies against them were generated. The results suggested that the viral proteins rVP5* and rVP6 interact with Hsc70 and PDI in vitro. It was also found that these recombinant viral proteins interact with Hsc70 in lipid rafts in cell culture. Treatment of the cells with either DLPs or rVP6 significantly inhibited rotavirus infection. Conclusion. The results allow us to conclude that r

  1. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing the simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  2. Notas sobre o fantasma nas toxicomanias

    Directory of Open Access Journals (Sweden)

    Walter Firmo de Oliveira Cruz

    Full Text Available This article was presented at the Clinical Conference of the Associação Psicanalítica de Porto Alegre, "The direction of the cure in drug addictions: the subject in question," in October 2003. Through the discussion of a clinical case, it seeks to highlight the importance of the relation between the subject's fantasmatic and the choice of object in drug addictions. It also approaches drug addiction as a symptom of contemporary times, as well as traits of the aesthetics that compose it.

  3. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  4. Advanced Interval Management: A Benefit Analysis

    Science.gov (United States)

    Timer, Sebastian; Peters, Mark

    2016-01-01

    This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

  5. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  6. A SMART NAS Toolkit for Optimality Metrics Overlay, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation proposed is a plug-and-play module for NASA's proposed SMART NAS (Shadow Mode Assessment using Realistic Technologies for the NAS) system that...

  7. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and
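
    The basic ingredient of such noise-source mapping, the cross-correlation of two station records, can be sketched with an FFT as follows; pre-processing steps such as spectral whitening and the sensitivity-kernel imaging itself are omitted.

        # FFT-based cross-correlation of two station records (pre-processing omitted)
        import numpy as np

        def noise_cross_correlation(a, b):
            nfft = 1 << (len(a) + len(b) - 1).bit_length()  # next power of two
            A = np.fft.rfft(a, nfft)
            B = np.fft.rfft(b, nfft)
            cc = np.fft.irfft(A * np.conj(B), nfft)
            return np.fft.fftshift(cc)                      # zero lag at the centre

        rng = np.random.default_rng(0)
        common = rng.standard_normal(4096)                  # shared noise source
        a = np.roll(common, 5) + 0.1 * rng.standard_normal(4096)
        b = common + 0.1 * rng.standard_normal(4096)
        cc = noise_cross_correlation(a, b)
        lag = cc.argmax() - len(cc) // 2    # +5 here: trace a is delayed by 5 samples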

  8. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM
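
    The embedded divide-and-conquer idea can be caricatured in one dimension: solve each cell independently on a slightly enlarged (buffered) domain, then keep only the core of each local solution. The Python toy below follows that pattern; the cell and halo sizes and the smoothing "solver" are invented stand-ins for the paper's QM/MD kernels, not the authors' algorithm.

```python
import numpy as np

def edc_solve(field, cell=64, halo=8, local_solver=None):
    """Divide-and-conquer with buffer (halo) regions: each cell is solved
    independently, so the loop below is trivially parallelizable."""
    n = len(field)
    out = np.empty_like(field)
    for start in range(0, n, cell):
        lo, hi = max(0, start - halo), min(n, start + cell + halo)
        local = local_solver(field[lo:hi])        # independent local problem
        out[start:start + cell] = local[start - lo:start - lo + cell]
    return out

# a smoothing kernel standing in for an expensive local solver
smooth = lambda v: np.convolve(v, np.ones(5) / 5, mode="same")
result = edc_solve(np.random.rand(1000), local_solver=smooth)
```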

  9. DISORDERED PROTEINS AND THEIR FUNCTION: A NEW WAY OF LOOKING AT PROTEIN STRUCTURE AND AT PLANT RESPONSES TO STRESS

    OpenAIRE

    César Luis Cuevas-Velázquez; Alejandra A. Covarrubias-Robles

    2011-01-01

    The dogma that ties a protein's function to a defined three-dimensional structure has been challenged in recent years by the discovery and characterization of the proteins known as unstructured or disordered proteins. These proteins possess high structural flexibility, which allows them to adopt different structures and thus to recognize diverse ligands while preserving specificity in recognizing them. Proteins of this type...

  10. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10,000 CPU cores; however, making an FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations of an airplane with a 700-wavelength wingspan and of a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
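
    The MPI virtual topology at the center of this optimization is a standard ingredient. A minimal mpi4py sketch of a 3-D Cartesian decomposition with one ghost-plane exchange follows; the decomposition and buffer shapes are illustrative assumptions, not the paper's optimized topology.

```python
# run under mpiexec; requires mpi4py and NumPy
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), 3)      # factor the ranks into a 3-D grid
cart = comm.Create_cart(dims, periods=[False] * 3, reorder=True)

nx = ny = nz = 32                                # local sub-grid owned by this rank
ez = np.zeros((nx + 2, ny + 2, nz + 2))          # +2 for one-cell ghost layers

# neighbours along x; at physical boundaries Shift returns MPI.PROC_NULL,
# which makes the Sendrecv below a harmless no-op on that side
src, dst = cart.Shift(0, 1)
recv = np.empty((ny + 2, nz + 2))
cart.Sendrecv(np.ascontiguousarray(ez[-2, :, :]), dest=dst,
              recvbuf=recv, source=src)
ez[0, :, :] = recv                               # fill the low-x ghost plane
```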

  11. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    Science.gov (United States)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  12. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  13. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding, or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets, without additional approximations, for up to a thousand atoms. With this method, hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution of the same order of magnitude as that of traditional semilocal GGA functionals. The method is implemented in a portable open-source library.
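
    For context, the exact exchange operator referred to above has the standard Fock form (textbook notation, not necessarily the authors'):

```latex
(\hat{V}_x \psi_i)(\mathbf{r})
  = -\sum_{j}^{\mathrm{occ}} \psi_j(\mathbf{r})
    \int \frac{\psi_j^{*}(\mathbf{r}')\,\psi_i(\mathbf{r}')}
              {\lvert \mathbf{r}-\mathbf{r}' \rvert}\, \mathrm{d}\mathbf{r}'
```

    Each occupied-orbital pair (i, j) thus requires a Poisson-like integral, and it is this quadratically growing set of convolutions that dominates the cost and is offloaded to the GPUs.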

  14. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach to teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricular integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  15. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but are by no means limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs

  16. First-principle natural band alignment of GaN / dilute-As GaNAs alloy

    Directory of Open Access Journals (Sweden)

    Chee-Keong Tan

    2015-01-01

    Full Text Available Density functional theory (DFT) calculations with the local density approximation (LDA) functional are employed to investigate the band alignment of dilute-As GaNAs alloys with respect to GaN. Conduction and valence band positions of the dilute-As GaNAs alloy with respect to GaN on an absolute energy scale are determined from a combination of bulk and surface DFT calculations. The resulting GaN / GaNAs conduction-to-valence band offset ratio is found to be approximately 5:95. Our theoretical finding is in good agreement with experimental observation, indicating that the upward movement of the valence band at low As content is mainly responsible for the drastic reduction of the energy band gap in dilute-As GaNAs. In addition, type-I band alignment of GaN / GaNAs is suggested as a reasonable approach for future device implementation with dilute-As GaNAs quantum wells, and a possible type-II quantum well active region can be formed by using an InGaN / dilute-As GaNAs heterostructure.
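
    Read plainly, the 5:95 ratio says that if the dilute-As alloy's gap is narrower than that of GaN by an amount ΔE_g, the change splits as follows (generic notation, inferred from the abstract):

```latex
\Delta E_c = 0.05\,\Delta E_g , \qquad \Delta E_v = 0.95\,\Delta E_g
```

    That is, almost all of the gap narrowing comes from the valence band moving up, consistent with the type-I alignment discussed above.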

  17. Auto-Suggest Capability via Machine Learning in SMART NAS, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We build machine learning capabilities that enable the Shadow Mode Assessment using Realistic Technologies for the NAS (SMART NAS) system to synthesize, optimize,...

  18. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  19. Advances in petascale kinetic plasma simulation with VPIC and Roadrunner

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, Kevin J [Los Alamos National Laboratory; Albright, Brian J [Los Alamos National Laboratory; Yin, Lin [Los Alamos National Laboratory; Daughton, William S [Los Alamos National Laboratory; Roytershteyn, Vadim [Los Alamos National Laboratory; Kwan, Thomas J T [Los Alamos National Laboratory

    2009-01-01

    VPIC, a first-principles 3d electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. We give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. We then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short pulse laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.

  20. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  1. Regulatory perspective on NAS recommendations for Yucca Mountain standards

    International Nuclear Information System (INIS)

    Brocoum, S.J.; Nesbit, S.P.; Duguid, J.A.; Lugo, M.A.; Krishna, P.M.

    1996-01-01

    This paper provides a regulatory perspective from the viewpoint of the potential licensee, the US Department of Energy (DOE), on the National Academy of Sciences (NAS) report on Yucca Mountain standards published in August 1995. The DOE agrees with some aspects of the NAS report; however, the DOE has serious concerns with the ability to implement some of the recommendations in a reasonable manner

  2. UAS-NAS Flight Test Series 3: Test Environment Report

    Science.gov (United States)

    Hoang, Ty; Murphy, Jim; Otto, Neil

    2016-01-01

    The desire and ability to fly Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) is of increasing urgency. Applications of unmanned aircraft to national security, defense, scientific research, and emergency management are driving the critical need for less restrictive access by UAS to the NAS. UAS represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the NAS. NASA's UAS Integration in the NAS Project is conducting research in the areas of Separation Assurance/Sense and Avoid Interoperability (SSI), Human Systems Integration (HSI), Communications (Comm), and Certification to support reducing the barriers to UAS access to the NAS. This research is broken into two research themes, namely UAS Integration and Test Infrastructure. UAS Integration focuses on airspace integration procedures and performance standards to enable UAS integration in the air transportation system, covering Detect and Avoid (DAA) performance standards, command and control performance standards, and human systems integration. The focus of Test Infrastructure is to enable development and validation of airspace integration procedures and performance standards, including integrated test and evaluation. In support of the integrated test and evaluation efforts, the Project will develop an adaptable, scalable, and schedulable relevant test environment capable of evaluating concepts and technologies for unmanned aircraft systems to safely operate in the NAS. To accomplish this task, the Project is conducting a series of human-in-the-loop (HITL) and flight test activities that integrate key concepts, technologies and/or procedures in a relevant air traffic environment. Each of the integrated events will build on the technical achievements, fidelity, and

  3. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
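
    The FFT trick underlying such docking codes (Katchalski-Katzir-style scoring) fits in a few lines of Python; the random grids below are placeholders, and this is a sketch of the general technique, not MEGADOCK's actual kernel.

```python
import numpy as np

def translation_scores(receptor, ligand):
    """Score every rigid translation of `ligand` against `receptor` at once:
    a 3-D correlation evaluated as a pointwise product in Fourier space."""
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand, s=receptor.shape)    # zero-pad ligand to same shape
    return np.fft.ifftn(R * np.conj(L)).real     # one score per (dx, dy, dz)

# toy usage: real codes encode surface/core weights in the voxel grids and
# repeat the scoring for each sampled ligand rotation
rec = np.random.rand(64, 64, 64)
lig = np.random.rand(16, 16, 16)
best = np.unravel_index(np.argmax(translation_scores(rec, lig)), rec.shape)
print("best translation:", best)
```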

  4. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and of modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the Numerical Aerodynamic Simulation (NAS) Program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  5. DISORDERED PROTEINS AND THEIR FUNCTION: A NEW WAY OF LOOKING AT PROTEIN STRUCTURE AND AT PLANT RESPONSES TO STRESS

    Directory of Open Access Journals (Sweden)

    César Luis Cuevas-Velázquez

    2011-01-01

    Full Text Available The dogma that ties a protein's function to a defined three-dimensional structure has been challenged in recent years by the discovery and characterization of the proteins known as unstructured or disordered proteins. These proteins possess high structural flexibility, which allows them to adopt different structures and thus to recognize diverse ligands while preserving specificity in recognizing them. Proteins of this type, which are highly hydrophilic and accumulate under water-deficit conditions (drought, salinity, freezing), have been termed hydrophilins. In plants, the best-characterized hydrophilins are the LEA proteins (Late Embryogenesis Abundant), which accumulate abundantly in the dry seed and in vegetative tissues when plants are exposed to water-limiting conditions. Recent evidence has shown that LEA proteins are required for plants to tolerate and adapt to conditions of low water availability. This review describes the most relevant data linking the physicochemical characteristics of these proteins to their structural flexibility, and how the latter is affected by environmental conditions, as well as their possible functions in the plant cell under water-limited conditions.

  6. Development of an advanced fluid-dynamic analysis code: α-flow

    International Nuclear Information System (INIS)

    Akiyama, Mamoru

    1990-01-01

    A project to develop a large-scale three-dimensional fluid-dynamics analysis code, α-FLOW, keeping pace with the recent advancement of supercomputers and workstations, has been in progress. This project, called the α-Project, has been promoted by the Association for Large Scale Fluid Dynamics Analysis Code, comprising private companies and research institutions such as universities. The development period for α-FLOW is four years, from March 1989 to March 1992. To date, the major portions of the basic design and program preparation have been completed, and the project is in the stage of testing each module. In this paper, the present status of the α-Project, its design policy, and an outline of α-FLOW are described. (author)

  7. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less

  8. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than

  9. UAS-NAS Integrated Human in the Loop: Test Environment Report

    Science.gov (United States)

    Murphy, Jim; Otto, Neil; Jovic, Srba

    2015-01-01

    The desire and ability to fly Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) is of increasing urgency. Applications of unmanned aircraft to national security, defense, scientific research, and emergency management are driving the critical need for less restrictive access by UAS to the NAS. UAS represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the NAS. NASA's UAS Integration in the NAS Project is conducting research in the areas of Separation Assurance/Sense and Avoid Interoperability (SSI), Human Systems Integration (HSI), and Communication to support reducing the barriers to UAS access to the NAS. This research was broken into two research themes, namely UAS Integration and Test Infrastructure. UAS Integration focuses on airspace integration procedures and performance standards to enable UAS integration in the air transportation system, covering Sense and Avoid (SAA) performance standards, command and control performance standards, and human systems integration. The focus of the Test Infrastructure theme was to enable development and validation of airspace integration procedures and performance standards, including the execution of integrated test and evaluation. In support of the integrated test and evaluation efforts, the Project developed an adaptable, scalable, and schedulable relevant test environment incorporating live, virtual, and constructive elements capable of validating concepts and technologies for unmanned aircraft systems to safely operate in the NAS. To accomplish this task, the Project planned to conduct three integrated events: a Human-in-the-Loop simulation and two Flight Test series that integrated key concepts, technologies and/or procedures in a relevant air traffic environment. Each of

  10. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science for performing complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  11. Low temperature grown GaNAsSb: A promising material for photoconductive switch application

    Energy Technology Data Exchange (ETDEWEB)

    Tan, K. H.; Yoon, S. F.; Wicaksono, S.; Loke, W. K.; Li, D. S. [School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798 (Singapore); Saadsaoud, N.; Tripon-Canseliet, C. [Laboratoire d' Electronique et Electromagnétisme, Pierre and Marie Curie University, 4 Place Jussieu, 75005 Paris (France); Lampin, J. F.; Decoster, D. [Institute of Electronics, Microelectronics and Nanotechnology (IEMN), UMR CNRS 8520, Universite des Sciences et Technologies de Lille, BP 60069, 59652 Villeneuve d' Ascq Cedex (France); Chazelas, J. [Thales Airborne Systems, 2 Avenue Gay Lussac, 78852 Elancourt (France)

    2013-09-09

    We report a photoconductive switch using low temperature grown GaNAsSb as the active material. The GaNAsSb layer was grown at 200 °C by molecular beam epitaxy in conjunction with a radio frequency plasma-assisted nitrogen source and a valved antimony cracker source. The low temperature growth of the GaNAsSb layer increased the dark resistivity of the switch and shortened the carrier lifetime. The switch exhibited a dark resistivity of 10{sup 7} Ω cm, a photo-absorption of up to 2.1 μm, and a carrier lifetime of ∼1.3 ps. These results strongly support the suitability of low temperature grown GaNAsSb in the photoconductive switch application.

  12. UAS Integration in the NAS: Detect and Avoid

    Science.gov (United States)

    Shively, Jay

    2018-01-01

    This presentation will cover the structure of the unmanned aircraft systems (UAS) integration into the national airspace system (NAS) project (UAS-NAS Project). The talk also details the motivation of the project to help develop standards for a detect-and-avoid (DAA) system, which is required in order to comply with requirements in manned aviation to see-and-avoid other traffic so as to maintain well clear. The presentation covers accomplishments reached by the project in Phase 1 of the research, and touches on the work to be done in Phase 2. The discussion ends with examples of the display work developed as a result of the Phase 1 research.

  13. "Sempre tivemos mulheres nos cantos e nas cordas": uma pesquisa sobre o lugar feminino nas corporações musicais

    Directory of Open Access Journals (Sweden)

    Mayara Pacheco Coelho

    2014-04-01

    Full Text Available This article is part of a research-intervention project on music and its identity articulations in the musical ensembles (community bands and orchestras) of the Campos das Vertentes region, especially São João del-Rei and neighboring towns. In this region, music plays a significant role in the formation of the citizens' cultural identity and in the history of the municipalities. The present study investigates gender determinations, seeking to understand how women musicians participate in the region's bands and orchestras. To that end, archaeological discourse analysis was used to contrast the statements of women musicians with those of male musicians in the ensembles, with the male voices present in philosophy, and with the utopian discourse about women. It was observed that traditional gender differences remain concealed in the everyday life of the musical ensembles. However, it was also observed that women musicians are beginning to be recognized in the ensembles and, above all, recognize themselves as capable of taking flight within them.

  14. Development of GaInNAsSb alloys: Growth, band structure, optical properties and applications

    International Nuclear Information System (INIS)

    Harris, James S. Jr.; Kudrawiec, R.; Yuen, H.B.; Bank, S.R.; Bae, H.P.; Wistey, M.A.; Jackrel, D.; Pickett, E.R.; Sarmiento, T.; Goddard, L.L.; Lordi, V.; Gugov, T.

    2007-01-01

    In the past few years, GaInNAsSb has been found to be a potentially superior material to both GaInNAs and InGaAsP for communications wavelength laser applications. It has been observed that due to the surfactant role of antimony during epitaxy, higher quality material can be grown over the entire 1.2-1.6 μm range on GaAs substrates. In addition, it has been discovered that antimony in GaInNAsSb also works as a constituent that significantly modifies the valence band. These findings motivated a systematic study of GaInNAsSb alloys with widely varying compositions. Our recent progress in growth and materials development of GaInNAsSb alloys and our fabrication of 1.5-1.6 μm lasers are discussed in this paper. We review our recent studies of the conduction band offset in (Ga,In) (N,As,Sb)/GaAs quantum wells and discuss the growth challenges of GaInNAsSb alloys. Finally, we report record setting long wavelength edge emitting lasers and the first monolithic VCSELs operating at 1.5 μm based on GaInNAsSb QWs grown on GaAs. Successful development of GaInNAsSb alloys for lasers has led to a much broader range of potential applications for this material including: solar cells, electroabsorption modulators, saturable absorbers and far infrared optoelectronic devices and these are also briefly discussed in this paper. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  15. Development of GaInNAsSb alloys: Growth, band structure, optical properties and applications

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James S. Jr.; Kudrawiec, R.; Yuen, H.B.; Bank, S.R.; Bae, H.P.; Wistey, M.A.; Jackrel, D.; Pickett, E.R.; Sarmiento, T.; Goddard, L.L.; Lordi, V.; Gugov, T. [Solid State and Photonics Laboratory, Stanford University, CIS-X 328, Via Ortega, Stanford, California 94305-4075 (United States)

    2007-08-15

    In the past few years, GaInNAsSb has been found to be a potentially superior material to both GaInNAs and InGaAsP for communications wavelength laser applications. It has been observed that due to the surfactant role of antimony during epitaxy, higher quality material can be grown over the entire 1.2-1.6 μm range on GaAs substrates. In addition, it has been discovered that antimony in GaInNAsSb also works as a constituent that significantly modifies the valence band. These findings motivated a systematic study of GaInNAsSb alloys with widely varying compositions. Our recent progress in growth and materials development of GaInNAsSb alloys and our fabrication of 1.5-1.6 μm lasers are discussed in this paper. We review our recent studies of the conduction band offset in (Ga,In) (N,As,Sb)/GaAs quantum wells and discuss the growth challenges of GaInNAsSb alloys. Finally, we report record setting long wavelength edge emitting lasers and the first monolithic VCSELs operating at 1.5 μm based on GaInNAsSb QWs grown on GaAs. Successful development of GaInNAsSb alloys for lasers has led to a much broader range of potential applications for this material including: solar cells, electroabsorption modulators, saturable absorbers and far infrared optoelectronic devices and these are also briefly discussed in this paper. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  16. [Methods in neonatal abstinence syndrome (NAS): results of a nationwide survey in Austria].

    Science.gov (United States)

    Bauchinger, S; Sapetschnig, I; Danda, M; Sommer, C; Resch, B; Urlesberger, B; Raith, W

    2015-08-01

    Neonatal abstinence syndrome (NAS) occurs in neonates whose mothers have taken addictive drugs or were under substitution therapy during pregnancy. Incidence numbers of NAS are rising globally; even in Austria, NAS is no longer rare. The aim of our survey was to reveal the status quo of dealing with NAS in Austria. A questionnaire was sent to 20 neonatology departments all over Austria; items included questions on scoring, therapy, breast-feeding and follow-up procedures. The response rate was 95%, of which 94.7% had written guidelines concerning NAS. The median number of children treated per year for NAS was 4. The Finnegan scoring system is used in 100% of the responding departments. Morphine is used most often, in opiate abuse (100%) as well as in multiple substance abuse (44.4%). The most frequent forms of morphine preparation are morphine and diluted tincture of opium. The frequency as well as the dosage of medication vary broadly. 61.1% of the departments supported breast-feeding; regulations concerned participation in a substitution programme and general contraindications (HIV, HCV, HBV). Our results revealed a large west-east gradient in the number of patients treated per year. NAS is no longer a rare entity in Austria (up to 50 cases per year in Vienna). Our survey showed that most neonatology departments in Austria treat their patients following written guidelines. Although all of them base these guidelines on international recommendations, there is no national consensus. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] enables the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  18. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences that cannot be aligned on a single machine due to memory and execution time constraints, and to have them aligned by the parallel version of TC. The web portal provides a user-friendly solution.

  19. Stability of emulsions prepared with soybean whey proteins

    Directory of Open Access Journals (Sweden)

    Jorge Wagner

    2011-12-01

    Full Text Available By cold acetone precipitation, samples of proteins isolated from two soybean wheys were obtained: whey SS, from the production of soy protein isolates, and tofu whey ST. From SS, and from the same whey previously freeze-dried and heated (SSLC), the proteins designated PSS and PSSLC, respectively, were obtained; from ST, the sample PST was prepared. The aim of this work was to analyze the stability of o/w emulsions prepared with soybean whey proteins, in comparison with a native soy protein isolate (ASN). The emulsions were prepared by homogenizing protein dispersions (0.1-1.0% w/v in 10 mM phosphate buffer, pH 7) with sunflower oil (mass fraction Φ = 0.33), using an Ultraturrax T-25. Stability was evaluated by measuring the separated oil, the particle size distribution (by laser diffraction), and the degrees of creaming and coalescence assessed from BackScattering profiles. At all concentrations tested, the emulsions prepared with proteins isolated (by cold acetone precipitation) from heat-treated tofu whey (PST) had a stability comparable to that of emulsions prepared with ASN. Lower stability was found in emulsions with native proteins from the laboratory-obtained, non-heat-treated soybean whey (PSS). The proteins obtained from this whey after freeze-drying and heating (PSSLC) exhibited a better emulsifying capacity. The results showed that soybean whey proteins present good emulsifying and stabilizing properties, which depend on the degree of denaturation and glycosylation reached.

  20. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment, with a variety of novel architectures and frameworks and no clarity as to which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product, the linear combination of vectors, and the dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
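
    A minimal Python illustration of the three-kernel idea follows; the sparse operator, step size, and forward-Euler update are invented stand-ins. The point is only that the whole update touches the hardware through three routines, so porting the code means reimplementing just those three.

```python
import numpy as np
import scipy.sparse as sp

def spmv(A, x):    return A @ x           # sparse matrix-vector product
def axpy(a, x, y): return a * x + y       # linear combination of vectors
def dot(x, y):     return float(x @ y)    # dot product

n = 100_000
L = sp.random(n, n, density=5e-5, format="csr")   # stand-in discrete operator
u = np.ones(n)
dt = 1.0e-3
u_new = axpy(dt, spmv(L, u), u)                   # u <- u + dt * (L u)
residual = dot(u_new - u, u_new - u)              # monitored with the third kernel
print(residual)
```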

  1. Proteins: redefining some concepts

    Directory of Open Access Journals (Sweden)

    Juan Camilo Calderon Vélez

    2006-04-01

    Full Text Available Knowledge of the primary, secondary, and tertiary structures of proteins grows every day; the terminology and its proper use can be confusing, even for those who know the field. This communication proposes a simple and practical way of approaching the subject.

  2. Applications Performance on NAS Intel Paragon XP/S - 15#

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)

    1994-01-01

    The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February 1993. The i860 XP microprocessor, with an integrated floating point unit and operating in dual-instruction mode, gives a peak performance of 75 million floating point operations per second (MFLOPS) for 64-bit floating point arithmetic. It is used in the Paragon XP/S-15, which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes, and its peak performance is 15.6 GFLOPS. Here we report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single-node one-dimensional FFT, a distributed two-dimensional FFT, and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compared it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: a. Impact of the operating system: Intel currently uses as a default the OSF/1 AD operating system from the Open Software Foundation. Paging of the Open Software Foundation (OSF) server, at 22 MB, to make more memory available for the application degrades performance. We found that when the limit of 26 MB per node, out of the 32 MB available, is reached, the application is paged out of main memory using virtual memory. When the application starts paging, performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. b. Impact of the data cache on the i860/XP: We measured the performance of the BLAS both assembly coded and Fortran
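
    Benchmarks of this kind amount to timing kernels of known flop count. A present-day analogue of such a BLAS-level microbenchmark is sketched below (machine-independent Python, not the original Paragon harness); it makes visible why BLAS 3 reaches much higher rates than BLAS 1.

```python
import time
import numpy as np

def timed(fn):
    t0 = time.perf_counter(); fn(); return time.perf_counter() - t0

def gflops(fn, flops, reps=5):
    """Best-of-reps wall-clock timing converted to GFLOP/s."""
    return flops / min(timed(fn) for _ in range(reps)) / 1e9

n = 2048
x, y = np.random.rand(n), np.random.rand(n)
A, B = np.random.rand(n, n), np.random.rand(n, n)

print("BLAS 1 axpy:", gflops(lambda: 2.0 * x + y, 2 * n))       # memory-bound
print("BLAS 2 gemv:", gflops(lambda: A @ x, 2 * n * n))
print("BLAS 3 gemm:", gflops(lambda: A @ B, 2 * n ** 3))        # compute-bound
```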

  3. The tattooing tradition in Chinese history

    OpenAIRE

    Zolotarjova, Anastasija

    2012-01-01

    The title of this bachelor's thesis is "The Tattooing Tradition in Chinese History". The tattooing tradition has an ancient history. Since antiquity, the peoples inhabiting the territory of China used tattooing for various purposes: as protection against evil spirits, as decoration, as a form of punishment, as a mark of identification, and for the taking of oaths. The tattooing traditions of the ethnic minorities of present-day China are also examined. The aim of the thesis is to study the traditions of the ancient peoples of China and of the ethnic minorities of present-day China and the reasons for their tattooing, as well as...

  4. Disability due to back pain among insured members of Brazil's Social Security system

    Directory of Open Access Journals (Sweden)

    Ney Meziat Filho

    2011-06-01

    Full Text Available OBJECTIVE: To describe disability retirements due to back pain. METHODS: Descriptive study with data from the Unified Benefits Information System and from the Statistical Yearbooks of Social Security for 2007. The incidence rate of back pain as a cause of disability retirement was calculated by age and sex for each state. Workdays lost to disability caused by back pain were calculated by occupation. RESULTS: Idiopathic back pain was the leading cause of disability among both social security and work accident retirements. Most beneficiaries lived in urban areas and were commerce workers. The incidence rate of back pain as a cause of disability retirement in Brazil was 29.96 per 100,000 contributors. This value was higher among men and among older people. Rondônia exhibited a rate four times higher than expected (RT = 4.05), and the second highest rate, in Bahia, was approximately twice the expected value (RT = 2.07). Commerce workers accounted for 96.9% of the days lost to disability. CONCLUSIONS: Back pain was an important cause of disability in 2007, especially among commerce workers, with large differences among the states.

  5. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. For this to remain so, particularly in a context of deregulation, three conditions must be met: competitiveness, safety, and public acceptance. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating material deterioration mechanisms, and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision is given of EDF's long-term R and D in the field of numerical simulation, and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  6. THE USE OF TWITTER IN THE 2010 PRESIDENTIAL ELECTIONS IN BRAZIL

    OpenAIRE

    MARCOS FRANCISCO SOARES FERREIRA

    2012-01-01

    This master's dissertation takes as its object of study the use of Twitter in the 2010 presidential elections. Its main objectives are: 1) to analyze Twitter as a communication tool in elections; 2) to identify how Twitter was used by the three main candidates for the presidency of Brazil in the 2010 presidential elections. It is an exploratory study, which systematizes empirical data collected directly from the Twitter accounts of the candidates studied...

  7. ASCI's Vision for supercomputing future

    International Nuclear Information System (INIS)

    Nowak, N.D.

    2003-01-01

    The full text of publication follows. Advanced Simulation and Computing (ASC, formerly Accelerated Strategic Computing Initiative [ASCI]) was established in 1995 to help Defense Programs shift from test-based confidence to simulation-based confidence. Specifically, ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality - far exceeding what might have been achieved in the absence of a focused initiative. To realize its vision, ASC is creating simulation and proto-typing capabilities, based on advanced weapon codes and high-performance computing

  8. Fiscal 2000 report on advanced parallelized compiler technology. Outlines; 2000 nendo advanced heiretsuka compiler gijutsu hokokusho (Gaiyo hen)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    Research and development was carried out on automatic parallelizing compiler technology, which improves the practical performance, cost/performance ratio, and ease of operation of the multiprocessor systems now used for constructing supercomputers and expected to provide a fundamental architecture for microprocessors in the 21st century. Efforts were made to develop an automatic multigrain parallelization technology for extracting multigrain parallelism from a program and making full use of it, and a parallelization tuning technology for accelerating parallelization by feeding back to the compiler the dynamic information and user knowledge acquired during execution. Moreover, a benchmark program was selected, and studies were made to set execution rules and evaluation indexes for evaluating the performance of parallelizing compilers on existing commercial parallel processing computers; this was achieved through the implementation and evaluation of the 'Advanced parallelizing compiler technology research and development project.' (NEDO)

  9. Combined protein C and S deficiency

    Directory of Open Access Journals (Sweden)

    Yaneth Zamora-González

    Full Text Available Thrombophilias are a group of diseases that favor the formation of thromboses, both arterial and venous, and they have been associated with various complications during pregnancy, such as recurrent miscarriage, preeclampsia, intrauterine growth restriction, and intrauterine fetal death, among others. Congenital or acquired deficiency of coagulation proteins, such as proteins C and S, is associated with thrombotic events before the age of 30 or 40. Deep vein thrombosis is considered the most frequent clinical manifestation, although it can also be associated with cerebrovascular disease, recurrent pregnancy loss, and other ischemic states. At present, thrombotic diseases are among the leading causes of death in the world; annual morbidity and mortality from thrombosis, whether arterial or venous, amounts to approximately two million people. We present a case with a history of recurrent pregnancy loss and deep vein thrombosis of the lower limbs, with combined protein C and S deficiency.

  10. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available Data from modeling the technological process of mold filling on the SKIF supercomputer system by means of the computer system 'ProLIT-lc', as well as data from modeling the steel pouring process by means of 'ProNRS-lc', are presented. The influence of the number of processors of the multi-core SKIF computer system on the speedup and the modeling time of technological processes connected with the production of castings and slugs is shown.

  11. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  12. The influence of Japanese popular culture on young people

    OpenAIRE

    Leščenko, Jekaterina

    2016-01-01

    The title of this thesis is "The Influence of Japanese Popular Culture on Young People". It is well known that in recent years Japanese popular culture has been gaining more and more popularity around the world. In almost every country one can find something connected with Japanese popular culture. Music, dramas, anime, and manga are common and well-known objects of Japanese popular culture. Since Japanese popular culture is so widespread, there must be some reason for its popularity. First of all, Japanese popular culture is completely different...

  13. Automation of Data Traffic Control on DSM Architecture

    Science.gov (United States)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry

    2001-01-01

    The design of distributed shared memory (DSM) computers liberates users from the duty of distributing data across processors and allows for the incremental development of parallel programs using, for example, OpenMP or Java threads. The DSM architecture greatly simplifies the development of parallel programs that perform well on a few processors. However, achieving good program scalability on DSM computers requires that the user understand the data flow in the application and use various techniques to avoid data traffic congestion. In this paper we discuss a number of such techniques, including data blocking, data placement, data transposition and page size control, and evaluate their efficiency on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks. We also present a tool which automates the detection of constructs causing data congestion in Fortran array-oriented codes and advises the user on code transformations for improving data traffic in the application.
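
    Of the techniques listed, data blocking combined with transposition is the simplest to illustrate. A NumPy sketch of a tiled transpose follows; the tile size is an assumption that real codes tune to cache line and page sizes.

```python
import numpy as np

def blocked_transpose(a, out, tile=64):
    """Transpose tile by tile so both the read stream and the write stream
    stay inside a small, cache- and page-friendly footprint."""
    n, m = a.shape
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            out[j:j + tile, i:i + tile] = a[i:i + tile, j:j + tile].T
    return out

a = np.arange(2048 * 2048, dtype=np.float64).reshape(2048, 2048)
out = np.empty_like(a)
assert np.array_equal(blocked_transpose(a, out), a.T)
```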

  14. Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software

    Science.gov (United States)

    Hunter, George; Boisvert, Benjamin

    2013-01-01

    This document is the final report for the project entitled "Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software." This report consists of 17 sections which document the results of the several subtasks of this effort. The Probabilistic NAS Platform (PNP) is an air operations simulation platform developed and maintained by the Saab Sensis Corporation. The improvements made to the PNP simulation include the following: an airborne distributed separation assurance capability, a required time of arrival assignment and conformance capability, and a tactical and strategic weather avoidance capability.

  15. Neonatal Abstinence Syndrome (NAS) in Southwestern Border States: Examining Trends, Population Correlates, and Implications for Policy.

    Science.gov (United States)

    Hussaini, Khaleel S; Garcia Saavedra, Luigi F

    2018-03-23

    Introduction: Neonatal abstinence syndrome (NAS) is a withdrawal syndrome in newborns following birth and is primarily caused by maternal drug use during pregnancy. This study examines trends, population correlates, and policy implications of NAS in two Southwest border states. Materials and Methods: A cross-sectional analysis of Hospital Inpatient Discharge Data (HIDD) was utilized to examine the incidence of NAS in the Southwest border states of Arizona (AZ) and New Mexico (NM). All inpatient hospital births in AZ and NM from January 1, 2008 through December 31, 2013 with ICD-9-CM codes for NAS (779.5), cocaine (760.72), or narcotics (760.75) were extracted. Results: During 2008-2013 there were 1472 NAS cases in AZ and 888 in NM. The overall NAS rate during this period was 2.83 per 1000 births (95% CI 2.68-2.97) in AZ and 5.31 (95% CI 4.96-5.66) in NM. NAS rates increased 157% in AZ and 174% in NM. NAS newborns were more likely to have low birth weight, respiratory distress, and feeding difficulties, and more likely to be covered by state Medicaid insurance. The AZ border region (bordering Mexico) had NAS rates significantly higher than the state rate (4.06 per 1000 births [95% CI 3.68-4.44] vs. 2.83 [95% CI 2.68-2.97], respectively). In NM, the border region rate (2.09 per 1000 births [95% CI 1.48-2.69]) was significantly lower than the state rate (5.31 [95% CI 4.96-5.66]). Conclusions: Despite a dramatic increase in the incidence of NAS in the U.S. and, in particular, the Southwest border states of AZ and NM, there is still scant research on the overall incidence of NAS, its assessment in the southwest border region, and associated long-term outcomes. The Healthy Border (HB) 2020 binational initiative of the U.S.-Mexico Border Health Commission addresses several public health priorities that include not only chronic and degenerative diseases, infectious diseases, injury prevention, and maternal and child health but also mental health and
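
    The reported rates have the standard form "cases per 1,000 live births" with a normal-approximation confidence interval; a minimal Python sketch (the birth denominator below is back-calculated from the reported AZ figures and is illustrative only):

      import math

      def rate_per_1000(cases, births, z=1.96):
          # Incidence rate per 1,000 live births with a Wald 95% CI.
          p = cases / births
          se = math.sqrt(p * (1 - p) / births)
          return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

      print(rate_per_1000(1472, 520_000))  # approx. (2.83, 2.69, 2.98)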

  16. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface including groundwater dynamics up into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial in such an application for understanding runtime behavior and identifying optimum model settings, and is an efficient way to pinpoint potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience from coupling, application tuning (e.g., a 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM, of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  17. Improved performance in GaInNAs solar cells by hydrogen passivation

    International Nuclear Information System (INIS)

    Fukuda, M.; Whiteside, V. R.; Keay, J. C.; Meleco, A.; Sellers, I. R.; Hossain, K.; Golding, T. D.; Leroux, M.; Al Khalfioui, M.

    2015-01-01

    The effect of UV-activated hydrogenation on the performance of GaInNAs solar cells is presented. A proof-of-principle investigation was performed on non-optimum GaInNAs cells, which allowed a clearer investigation of the role of passivation on the intrinsic nitrogen-related defects in these materials. Upon optimized hydrogenation of GaInNAs, a significant reduction in the presence of defect and impurity based luminescence is observed as compared to that of unpassivated reference material. This improvement in the optical properties is directly transferred to an improved performance in solar cell operation, with a more than two-fold improvement in the external quantum efficiency and short circuit current density upon hydrogenation. Temperature dependent photovoltaic measurements indicate a strong contribution of carrier localization and detrapping processes, with non-radiative processes dominating in the reference materials, and evidence for additional strong radiative losses in the hydrogenated solar cells

  18. Music: a sound image in the Base Ecclesial Communities

    OpenAIRE

    Roberto Barroso da Rocha

    2012-01-01

    This dissertation aims to analyze the social function of music in the CEBs (Base Ecclesial Communities), which has biblical foundations and remains present today. The first part deals with the social function of music in the Bible up to the present day, with a brief narrative of the history of Western music. The second part addresses music in the CEBs and Liberation Theology as an important part of the musical context, in which the ideals of Liberation Theology are disseminated through music; ...

  19. Operational implications and proposed infrastructure changes for NAS integration of remotely piloted aircraft (RPA)

    Science.gov (United States)

    2014-12-01

    The intent of this report is to provide (1) an initial assessment of National Airspace System (NAS) infrastructure affected by continuing development and deployment of unmanned aircraft systems into the NAS, and (2) a description of process challenge...

  20. Assessment techniques for a learning-centered curriculum: evaluation design for Adventures in Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States)]; Summers, B.G. [Oak Ridge National Lab., TN (United States)]

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  1. UAS in the NAS: Survey Responses by ATC, Manned Aircraft Pilots, and UAS Pilots

    Science.gov (United States)

    Comstock, James R., Jr.; McAdaragh, Raymon; Ghatas, Rania W.; Burdette, Daniel W.; Trujillo, Anna C.

    2014-01-01

    NASA currently is working with industry and the Federal Aviation Administration (FAA) to establish future requirements for Unmanned Aircraft Systems (UAS) flying in the National Airspace System (NAS). To address these issues NASA has established a multi-center "UAS Integration in the NAS" project. In order to establish Ground Control Station requirements for UAS, the perspective of each of the major players in NAS operations was desired. Three on-line surveys were administered that focused on Air Traffic Controllers (ATC), pilots of manned aircraft, and pilots of UAS. Follow-up telephone interviews were conducted with some survey respondents. The survey questions addressed UAS control, navigation, and communications from the perspective of small and large unmanned aircraft. Questions also addressed issues of UAS equipage, especially with regard to sense-and-avoid capabilities. From the civilian ATC and military ATC perspectives, of particular interest are how mixed (manned/UAS) operations have worked in the past and the role of aircraft equipage. Knowledge gained from this information is expected to assist the NASA UAS Integration in the NAS project in directing research foci, thus assisting the FAA in the development of rules, regulations, and policies related to UAS in the NAS.

  2. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary, our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16), while the total lattice (128 x 128 x 256 x 32 sites) has long been a lattice QCD goal for thermodynamic studies. This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped in BG/L in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and a 30-year-long dream for lattice QCD.

  3. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas, including the important effects of self-absorption of line radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message-passing paradigm and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours.
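
    The expected speedup follows from Amdahl's law applied to the measured 70% parallel fraction; a quick check (our own worked example, not from the paper):

      def amdahl_speedup(parallel_fraction, n_procs):
          # Amdahl's law: the serial fraction bounds the achievable speedup.
          serial = 1.0 - parallel_fraction
          return 1.0 / (serial + parallel_fraction / n_procs)

      for n in (4, 16, 64, 1_000_000):
          print(n, round(amdahl_speedup(0.70, n), 2))
      # -> 2.11, 2.91, 3.22, 3.33: the asymptote 1/0.3 matches the quoted
      #    "order of three" (30 hours -> about 10 hours).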

  4. Elements of qualitative and quantitative analysis of wheat gluten proteins

    OpenAIRE

    Díaz Dellavalle, Paola; Dalla Rizza, Marco; Vázquez, Daniel; Castro, Marina

    2006-01-01

    The quality of bread wheat (Triticum aestivum L.) depends on the quality and quantity of the gluten proteins (glutenins and gliadins), which constitute 10 to 14% of the grain proteins. Several quantitative parameters, such as the total protein content of the flour, the content of polymeric proteins present in the grain, and the ratio of glutenins to gliadins, are related to breadmaking quality. This work presents the characterization of the glutenins of...

  5. Identification of coat and membrane proteins in Plasmodium falciparum merozoites

    Directory of Open Access Journals (Sweden)

    Enid Rivadeneira

    1987-06-01

    Proteins that are easily shed from the merozoite (probably coat proteins) and intrinsic membrane proteins were identified by fractionation of endogenously labeled, purified parasites. Continuous labeling throughout the whole cycle ensured identification of the proteins regardless of their time of synthesis. This method made it possible to detect membrane proteins independently of their susceptibility to enzymatic digestion or exogenous labeling. Four proteins of 100, 75, 50 and 45 kDa were identified that are probably constituents of the merozoite coat. In the detergent-soluble membrane fraction, 6 major proteins of 225, 86, 82, 75, 72 and 40 kDa and 4 minor proteins of 200, 69, 45 and 43 kDa were detected. This work is a contribution to the characterization of the surface of the Plasmodium falciparum merozoite.

  6. Effect of antimony on the deep-level traps in GaInNAsSb thin films

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Muhammad Monirul, E-mail: islam.monir.ke@u.tsukuba.ac.jp; Miyashita, Naoya; Ahsan, Nazmul; Okada, Yoshitaka [Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904 (Japan)]; Sakurai, Takeaki; Akimoto, Katsuhiro [Institute of Applied Physics, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573 (Japan)]

    2014-09-15

    Admittance spectroscopy has been performed to investigate the effect of antimony (Sb) on GaInNAs material in relation to the deep-level defects in this material. Two electron traps, E1 and E2, at energy levels 0.12 and 0.41 eV below the conduction band (EC), respectively, were found in undoped GaInNAs. Bias-voltage-dependent admittance confirmed that E1 is an interface-type defect, spatially localized at the GaInNAs/GaAs interface, while E2 is a bulk-type defect located around the mid-gap of the GaInNAs layer. The introduction of Sb improved the material quality, as evidenced by the reduction of both the interface- and bulk-type defects.

  7. Examination of Frameworks for Safe Integration of Intelligent Small UAS into the NAS

    Science.gov (United States)

    Logan, Michael J.

    2012-01-01

    This paper discusses a proposed framework for the safe integration of small unmanned aerial systems (sUAS) into the National Airspace System (NAS). The paper briefly examines the potential uses of sUAS to build an understanding of the location and frequency of potential future flight operations based on the future applications of the sUAS systems. The paper then examines the types of systems that would be required to meet the application-level demand to determine "classes" of platforms and operations. A framework for categorization of the "intelligence" level of the UAS is postulated for purposes of NAS integration. Finally, constraints on the intelligent systems are postulated to ensure their ease of integration into the NAS.

  8. Documents assignment to archival fonds in research institutions of the NAS of Ukraine

    Directory of Open Access Journals (Sweden)

    Sichova O.

    2015-01-01

    The article analyzes the main aspects of the assignment of the records of research institutions of the NAS of Ukraine to archival fonds, in particular the assignment of records to archival fonds according to certain characteristics and the creation of archival fonds in accordance with the scientific principles of provenance, continuity and archival fond integrity. It describes the features of the internal systematization of the documents of research institutions of the NAS of Ukraine, caused by the specifics of the institutions' functions; illustrates, with examples, how institutional archival fonds acquire their names and the conditions leading to their renaming; and analyzes the procedure for fixing the chronological scope of the archival fond of a research institution of the NAS of Ukraine.

  9. Skating: an alternative for physical education classes

    OpenAIRE

    Pardo, Cindya Katerine

    2016-01-01

    Introduction: Skating is without doubt a social phenomenon. For children it is a motivating game of vertigo, producing sensations of mastering speed and the fear of falling. It develops several psychomotor aspects that are worked on in physical education classes. Objective: To present and include skating in the school environment, at the elementary level, as an alternative for Physical Education classes. Materials and Methods: A bibliographic review was carried out...

  10. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak performance. For sparse matrices, storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, the need to insert new nonzeros into the sparse storage scheme, the lack of data locality, etc.). On many high-speed computers a dense matrix technique is therefore preferable to a sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...

  11. Legal deposit in Portuguese libraries

    OpenAIRE

    Fiolhais, Carlos

    2007-01-01

    The model of legal deposit in Portuguese libraries is questioned in the light of the financial and other difficulties these libraries face; a rationalization of legal deposit is advocated, together with the taking of a position by the Biblioteca Nacional, the entity that manages the system.

  12. First-principles study on structure stabilities of α-S and Na-S battery systems

    Science.gov (United States)

    Momida, Hiroyoshi; Oguchi, Tamio

    2014-03-01

    To understand the microscopic mechanisms of the charge and discharge reactions in Na-S batteries, there is an increasing need to study the fundamental atomic and electronic structures of elemental S as well as of the Na-S phases. The most stable form of S at ambient temperature and pressure is the orthorhombic α-S crystal, which consists of puckered S8 rings crystallizing in space group Fddd. In this study, the crystal structure of α-S is examined using first-principles calculations with and without the van der Waals interaction corrections of Grimme's method, and the results clearly show that the van der Waals interactions between the S8 rings play a crucial role in the cohesion of α-S. We also study the structural stabilities of the Na2S, NaS, NaS2, and Na2S5 phases with reported crystal structures. Using the calculated total energies of the crystal structure models, we estimate discharge voltages assuming discharge reactions of the form 2Na + xS → Na2Sx, and discharge reactions in Na/S battery systems are discussed by comparison with experimental results. This work was partially supported by the Elements Strategy Initiative for Catalysts and Batteries (ESICB) of the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
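
    The average discharge voltage implied by such a reaction is conventionally obtained from the calculated total energies of the end members; in LaTeX form (a standard construction sketched for reference, not quoted from the paper):

      \[
      2\,\mathrm{Na} + x\,\mathrm{S} \longrightarrow \mathrm{Na_2S_x},
      \qquad
      \bar{V} = -\frac{E(\mathrm{Na_2S_x}) - 2\,E(\mathrm{Na}) - x\,E(\mathrm{S})}{2e},
      \]
      where $E(\cdot)$ is the total energy per formula unit and the factor
      of $2$ counts the electrons transferred per formula unit of Na$_2$S$_x$.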

  13. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers, including OLCF's Titan and NRC-KI's HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for the analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  14. Marx's influence on the songs of John Lennon

    Directory of Open Access Journals (Sweden)

    Roseli Coutinho dos Santos Nunes

    2014-11-01

    This article presents the influence of the philosopher Karl Marx on the openly political songs Revolution (1968), Working Class Hero (1970) and Power to the People (1971), whose main creative force in composition and recording was John Lennon, the Beatle most involved with Marxist theory. It also presents the influence of the Marxist way of thinking on many of the Beatles' works: changing the way people think about the world in order to create a better and fairer world and, in the later works, drawing attention to the inequality between social classes.

  15. Changes in myofibrillar protein fractions and tenderness of the Longissimus muscle of cattle during the postmortem period

    OpenAIRE

    Santos,Gilmara Bruschi; Ramos,Paulo Roberto Rodrigues; Spim,Jeison Solano

    2014-01-01

    The aim of this study was to identify, by electrophoresis, the changes in the muscle protein fractions during the postmortem period in cattle of different genetic groups, and to analyze meat tenderness in samples chilled for 24 hours (non-aged) and aged for 7 days. Samples of the Longissimus muscle of forty-eight cattle belonging to 4 genetic groups were used: 12 Nelore; 12 crossbred ½ Nelore ½ Aberdeen-Angus x Brahman; 12 Brangus; 12 crossbred ½ Nelore ½ Aberdeen-Angu...

  16. Ubiquity and diversity of heterotrophic bacterial nasA genes in diverse marine environments.

    Directory of Open Access Journals (Sweden)

    Xuexia Jiang

    Nitrate uptake by heterotrophic bacteria plays an important role in marine nitrogen cycling. However, few studies have investigated the diversity of environmental nitrate-assimilating bacteria (NAB). In this study, the diversity and biogeographical distribution of NAB in several global oceans, and particularly in the western Pacific marginal seas, were investigated using both cultivation and culture-independent molecular approaches. Phylogenetic analyses based on 16S rRNA and nasA (encoding the large subunit of the assimilatory nitrate reductase) gene sequences indicated that the cultivable NAB in the South China Sea belonged to the α-Proteobacteria, γ-Proteobacteria and CFB (Cytophaga-Flavobacteria-Bacteroides) bacterial groups. In all the environmental samples of the present study, α-Proteobacteria, γ-Proteobacteria and Bacteroidetes were the dominant nasA-harboring bacteria. Almost all of the α-Proteobacteria OTUs were classified into three Roseobacter-like groups (I to III). Clone library analysis revealed previously underestimated nasA diversity; e.g., nasA gene sequences affiliated with β-Proteobacteria, ε-Proteobacteria and Lentisphaerae were observed in a field investigation for the first time, to the best of our knowledge. The geographical and vertical distributions of seawater nasA-harboring bacteria indicated that NAB are highly diverse and ubiquitously distributed in the studied marginal seas and world oceans. Niche adaptation and separation and/or limited dispersal might shape the NAB composition and community structure in different water bodies. In the shallow-water Kueishantao hydrothermal vent environment, chemolithoautotrophic sulfur-oxidizing bacteria were the primary NAB, indicating a unique nitrate-assimilating community in this extreme environment. In the coastal water of the East China Sea, the relative abundance of Alteromonas and Roseobacter-like nasA gene sequences responded closely to algal blooms, indicating

  17. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF) - R and D Division, 92 - Clamart (France)]

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. For them to remain so, particularly in a context of deregulation, three conditions must be met: competitiveness, safety and public acceptance. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating material deterioration mechanisms, and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF's long-term R and D in the field of numerical simulation is given, and in particular of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  18. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation, conducted on a production supercomputer, of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5% accuracy, and Overflow to within 10%.

  19. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory]; Germann, Timothy C [Los Alamos National Laboratory]; Kadau, Kai [Los Alamos National Laboratory]; Fossum, Gordon C [IBM CORPORATION]

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 Mflop/s per watt and a price-performance of approximately 3.69 Mflop/s per dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
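
    A back-of-the-envelope consistency check of the quoted figures (illustrative arithmetic only):

      sustained_tflops = 369.0
      fraction_of_peak = 0.277
      print(sustained_tflops / fraction_of_peak)  # ~1332 Tflop/s implied peak

      mflops_per_watt = 124.0
      watts = sustained_tflops * 1e6 / mflops_per_watt
      print(watts / 1e6)  # ~3.0 MW implied system power draw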

  20. Atherogenic remnant lipoproteins in humans

    Directory of Open Access Journals (Sweden)

    Regina Wikinski

    2010-08-01

    Remnant lipoproteins (RLPs) are the product of the lipolysis of the triglycerides transported by very low density lipoproteins (VLDL) of hepatic and intestinal origin and by intestinal chylomicrons. This lipolysis is catalyzed by lipoprotein lipase and proceeds in successive steps, so that the products are heterogeneous. Their fasting plasma concentration is small in normolipemic patients and increases in the postprandial state. Genetic alterations in subtypes of their Apo-E component markedly increase their plasma concentration and produce the dysbetalipoproteinemia phenotype. They are considered atherogenic because they injure the endothelium, undergo oxidative stress, are taken up by macrophages in the vascular subendothelium, and generate the foam cells that are precursors of atheromas. Their metabolic origin, as products of several types of lipoproteins, explains their heterogeneous structure, their variable plasma concentrations, and the methodological difficulties that hinder their inclusion in the lipoprotein profile as part of epidemiological studies. The latest advances in metabolic studies and an update of their clinical role justify a review of current knowledge.

  1. Investigations of the Optical Properties of GaNAs Alloys by First-Principle.

    Science.gov (United States)

    Borovac, Damir; Tan, Chee-Keong; Tansu, Nelson

    2017-12-11

    We present a Density Functional Theory (DFT) analysis of the optical properties of dilute-As GaN1-xAsx alloys with arsenic (As) content ranging from 0% up to 12.5%. The real and imaginary parts of the dielectric function are investigated, and the results are compared to experimental and theoretical values for GaN. The analysis extends to the complex refractive index and the normal-incidence reflectivity. The refractive index difference between GaN and GaNAs alloys can be engineered to be up to ~0.35 in the visible regime by inserting relatively low amounts of As content into the GaN system. The analysis thus elucidates the birefringence of the dilute-As GaNAs alloys, and a comparison to other experimentally characterized III-nitride systems is drawn. Our findings indicate the potential of GaNAs alloys for III-nitride based waveguide and photonic circuit design applications.
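
    For reference, the quantities surveyed here follow from the dielectric function ε = ε₁ + iε₂ in the standard way (textbook relations, not results specific to the paper):

      \[
      n(\omega)=\sqrt{\tfrac{1}{2}\left(\sqrt{\varepsilon_1^2+\varepsilon_2^2}+\varepsilon_1\right)},
      \quad
      \kappa(\omega)=\sqrt{\tfrac{1}{2}\left(\sqrt{\varepsilon_1^2+\varepsilon_2^2}-\varepsilon_1\right)},
      \quad
      R=\frac{(n-1)^2+\kappa^2}{(n+1)^2+\kappa^2}.
      \]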

  2. Reflux patterns in the saphenous veins of men with chronic venous insufficiency

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Engelhorn

    Abstract Background: Chronic venous insufficiency (CVI) is frequent and predominates in women, but there is still little information on reflux in the saphenous veins in the male population. Objectives: To identify the different patterns of reflux in the great saphenous veins (GSV) and small saphenous veins (SSV) in men, correlating these data with the clinical presentation according to the Clinical, Etiological, Anatomical and Pathophysiological (CEAP) classification. Methods: 369 lower limbs of 207 men with a clinical diagnosis of primary CVI were evaluated by vascular ultrasonography (VU). The variables analyzed were the CEAP class, the pattern of reflux in the GSVs and SSVs, and the correlation between the two. Results: In the 369 limbs evaluated, 72.9% of the GSVs presented reflux, predominantly of the segmental pattern (33.8%). In the SSVs, 16% of the lower limbs analyzed presented reflux, most frequently of the distal pattern (33.9%). Of the limbs classified as C4, C5 and C6, 100% presented reflux in the GSV, predominantly proximal reflux (25.64%), and 38.46% presented reflux in the SSV, with the distal and proximal patterns occurring equally (33.3% each). Reflux at the saphenofemoral junction (SFJ) was detected in 7.1% of limbs in classes C0 and C1, 35.6% in classes C2 and C3, and 64.1% in classes C4 to C6. Conclusions: The segmental reflux pattern predominates in the GSV, and the distal reflux pattern predominates in the SSV. The occurrence of SFJ reflux is greater in patients with more advanced CVI.

  3. Detection of immunoreactive proteins of Rickettsia sp. strain Mata Atlântica

    Directory of Open Access Journals (Sweden)

    Caroline S. Oliveira

    ABSTRACT: Brazilian Spotted Fever (BSF) is an infectious disease transmitted to humans by ticks. A new human rickettsiosis, named Rickettsia sp. strain Mata Atlântica, was described as a cause of spotted fever in the State of São Paulo. The present work aimed to detect and identify proteins of this newly described strain with the potential to stimulate the immune system of a mammalian host. To this end, total protein extraction of Rickettsia sp. strain Mata Atlântica was performed. The extracted proteins were fractionated by electrophoresis. The protein bands were transferred to nitrocellulose membranes by electric migration and subjected to Western blotting for protein detection. In all, seven immunoreactive proteins were detected. Two proteins were most abundant, with molecular weights of 200 and 130 kDa, respectively. Based on comparison with existing proteomic maps and on their molecular weights, it is suggested that these two proteins represent rOmpA (200 kDa) and rOmpB (130 kDa). The other detected proteins occurred less abundantly, with molecular weights below 78 kDa, and may represent members of the surface cell antigen (Sca) family. The detected proteins may serve as a basis for developing sensitive and specific diagnostic methods and vaccines, as well as enabling new studies toward more effective therapies.

  4. An efficient parallel algorithm for matrix-vector multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high-performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
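
    A serial Python sketch of the 2-D block decomposition underlying such algorithms (the processor grid is simulated with loops; sizes and names are illustrative, and the hypercube communication is only indicated in comments):

      import numpy as np

      def block_matvec(a, x, grid=4):
          # Split the matrix over a grid x grid "processor" array; each
          # processor multiplies its local block by its slice of x, and
          # partial results are summed across each processor row -- the
          # step whose cost is O(n/sqrt(p) + log p) on a hypercube.
          n = a.shape[0]
          b = n // grid
          y = np.zeros(n)
          for i in range(grid):         # processor rows
              partial = np.zeros(b)
              for j in range(grid):     # processor columns
                  partial += a[i*b:(i+1)*b, j*b:(j+1)*b] @ x[j*b:(j+1)*b]
              y[i*b:(i+1)*b] = partial  # row-wise reduction result
          return y

      a = np.random.default_rng(1).random((512, 512))
      x = np.random.default_rng(2).random(512)
      assert np.allclose(block_matvec(a, x), a @ x)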

  5. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project FY17 Annual Review

    Science.gov (United States)

    Sakahara, Robert; Hackenberg, Davis; Johnson, William

    2017-01-01

    This presentation was given to the Integrated Aviation Systems Program at the FY17 Annual Review of the UAS-NAS project. It provides an overview of the work completed by the UAS-NAS project and its subprojects.

  6. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  7. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful for enabling faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only in the coastal regions. To balance the computation load across CPUs in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of that layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
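
    A minimal Python sketch of the load-balancing rule described above (layer names and grid counts are invented for illustration):

      def allocate_cpus(grid_points, total_cpus):
          # Allocate CPUs to nested layers in proportion to their grid
          # points; each layer is then 1-D-decomposed among its CPUs.
          # Rounding may leave a few CPUs unassigned in this sketch.
          total = sum(grid_points.values())
          return {name: max(1, round(total_cpus * n / total))
                  for name, n in grid_points.items()}

      layers = {"ocean_810m": 4_000_000, "coast_270m": 9_000_000,
                "harbor_90m": 27_000_000}
      print(allocate_cpus(layers, 1024))  # -> {...: 102, ...: 230, ...: 691}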

  8. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University]; Rogers, James H [ORNL]; Maxwell, Don E [ORNL]

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second-fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  9. THE QUESTION OF URBAN MOBILITY IN BRAZILIAN METROPOLISES

    Directory of Open Access Journals (Sweden)

    Valéria Pero

    2015-12-01

    ABSTRACT: Commuting time from home to work has risen substantially in Brazilian metropolitan regions over the last decade. This phenomenon has strong implications for individual well-being, but the consequences of this problem are not evenly distributed across the population. This paper aims to contribute to the debate on urban mobility in Brazilian metropolises by analyzing the evolution of commuting time between 1992 and 2013 and its differences according to worker characteristics, such as sex, color and per capita income, and characteristics of the job. The increase in average commuting time occurred from 2003 onwards, making this a particularly important issue for Brazilian metropolises in the third millennium. The workers with the longest average commuting times live in the metropolitan regions of Rio de Janeiro and São Paulo. However, the highest growth rates occurred in the metropolises of Pará, Salvador and Recife, suggesting the need for better targeting and planning of public policies on urban mobility. Considering socioeconomic differences, the poorest and the richest (the extremes of the income distribution) tend to have shorter commuting times than workers from middle-income families. This pattern holds over time, with an increase in average commuting time among the poorest, revealing one face of inequality. The largest increase, however, occurred among the richest, placing the question of urban mobility beyond problems of social exclusion.

  10. SUSTAINABILITY IN CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Gabriela Siqueira Manhães

    2014-11-01

    The production process of the civil construction sector is quite heterogeneous, encompassing different scopes of productive organization and different forms of commercialization of its final products, the buildings. In construction, the complexity and indispensability of planning and management are heightened by the market's growing demand for higher quality in development and better performance of the final product. This can not only improve communication among the agents involved and rationalize construction and building, but also point to intelligent and sustainable alternatives that respond to the need to minimize environmental impacts. The discussion involving the concepts of intelligent and sustainable buildings, although present in academia, does not seem settled and encompasses the different levels of organization of the individual and of society. The present study aims to discuss the use of these concepts in the construction process and in its final result, in order to identify possible relations between them and their contributions in the context of the sustainability of civil construction. It was observed that the technological innovations deployed in the various stages of the process, from construction to the final product, generated sustainable solutions that help mitigate impacts on the environment. The work contributes to a reflection on concepts of sustainability within a more integral vision of architecture, addressing both the process and the product of architectural production.

  11. Applications Performance Under MPL and MPI on NAS IBM SP2

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    On July 5, 1994, an IBM Scalable POWERparallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of an RS 6000/590 workstation module with a 66.5 MHz clock which can perform four floating-point operations per cycle, for a peak performance of 266 Mflop/s. By the end of 1994, the 64 nodes of the IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS 6000/590 will help application scientists in porting, optimizing, and tuning codes from other machines, such as the CRAY C90 and the Paragon, to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating-point units, and data cache optimization on the RS 6000/590 are illustrated, with examples giving the performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.
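
    As a small illustration of the kind of per-node BLAS measurement quoted above, a NumPy DAXPY timing sketch (absolute numbers on a modern machine will differ from the POWER2 figures by orders of magnitude):

      import time
      import numpy as np

      def daxpy_mflops(n=1_000_000, reps=50, alpha=1.5):
          # Time y := alpha*x + y and report Mflop/s (2 flops per element).
          x = np.random.default_rng(0).random(n)
          y = np.random.default_rng(1).random(n)
          t0 = time.perf_counter()
          for _ in range(reps):
              y += alpha * x
          dt = time.perf_counter() - t0
          return 2.0 * n * reps / dt / 1e6

      print(f"{daxpy_mflops():.0f} Mflop/s")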

  12. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for the flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a Bondarenko-type multigroup cross-section library with 190 groups: 132 groups for the fast and epithermal regions and 58 groups for the thermal region. Evaluation work showed that this code reproduces the experimental results for the effective neutron multiplication factor to an accuracy of about 1%. (author)
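
    The multi-particle idea is easiest to see in array form; a toy Python/NumPy sketch of batch flight analysis in a 1-D slab (geometry and cross section are invented for illustration, and the paper's pseudo-scattering trick is not modeled):

      import numpy as np

      rng = np.random.default_rng(42)

      def track_batch(x, direction, sigma_t, n_steps=10):
          # Advance a whole batch of particles per step with array ops,
          # mimicking multi-particle tracking on a vector processor.
          alive = np.ones(x.size, dtype=bool)
          for _ in range(n_steps):
              dist = -np.log(rng.random(x.size)) / sigma_t  # free flights
              x = np.where(alive, x + direction * dist, x)
              alive &= (x >= 0.0) & (x <= 10.0)             # slab [0, 10]
              # Isotropic (1-D) scattering: resample live directions.
              direction = np.where(alive,
                                   rng.choice([-1.0, 1.0], x.size),
                                   direction)
          return x, alive

      x0 = np.full(100_000, 5.0)               # start mid-slab
      d0 = rng.choice([-1.0, 1.0], x0.size)
      x, alive = track_batch(x0, d0, sigma_t=1.0)
      print("fraction still in slab:", alive.mean())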

  13. An Integrated Gate Turnaround Management Concept Leveraging Big Data Analytics for NAS Performance Improvements

    Science.gov (United States)

    Chung, William W.; Ingram, Carla D.; Ahlquist, Douglas Kurt; Chachad, Girish H.

    2016-01-01

    "Gate Turnaround" plays a key role in the National Air Space (NAS) gate-to-gate performance by receiving aircraft when they reach their destination airport, and delivering aircraft into the NAS upon departing from the gate and subsequent takeoff. The time spent at the gate in meeting the planned departure time is influenced by many factors and often with considerable uncertainties. Uncertainties such as weather, early or late arrivals, disembarking and boarding passengers, unloading/reloading cargo, aircraft logistics/maintenance services and ground handling, traffic in ramp and movement areas for taxi-in and taxi-out, and departure queue management for takeoff are likely encountered on the daily basis. The Integrated Gate Turnaround Management (IGTM) concept is leveraging relevant historical data to support optimization of the gate operations, which include arrival, at the gate, departure based on constraints (e.g., available gates at the arrival, ground crew and equipment for the gate turnaround, and over capacity demand upon departure), and collaborative decision-making. The IGTM concept provides effective information services and decision tools to the stakeholders, such as airline dispatchers, gate agents, airport operators, ramp controllers, and air traffic control (ATC) traffic managers and ground controllers to mitigate uncertainties arising from both nominal and off-nominal airport gate operations. IGTM will provide NAS stakeholders customized decision making tools through a User Interface (UI) by leveraging historical data (Big Data), net-enabled Air Traffic Management (ATM) live data, and analytics according to dependencies among NAS parameters for the stakeholders to manage and optimize the NAS performance in the gate turnaround domain. The application will give stakeholders predictable results based on the past and current NAS performance according to selected decision trees through the UI. The predictable results are generated based on analysis of the

  14. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be used efficiently for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built using single board computers (SBCs), which offer relatively high computation capacity compared to their price and power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation demonstrated by benchmarking ten different SBCs, namely the Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster, and the performance of the cluster is tested, too.

  15. Lipoproteins and inflammation in multiple sclerosis

    OpenAIRE

    Cascais, Maria João Coelho Melo

    2010-01-01

    Preamble: Inflammatory processes induce marked alterations in the metabolism of plasma lipoproteins, and these, in turn, regulate immune reactions. Given the many relations between innate and acquired immunity and lipoprotein metabolism, in this work we investigated their possible relevance to the understanding of Multiple Sclerosis (MS), a neuroinflammatory and neurodegenerative disease of the Central Nervous System (CNS). As will be evident...

  16. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  17. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  18. Benefits of a Unified LaSRS++ Simulation for NAS-Wide and High-Fidelity Modeling

    Science.gov (United States)

    Glaab, Patricia; Madden, Michael

    2014-01-01

    The LaSRS++ high-fidelity vehicle simulation was extended in 2012 to support a NAS-wide simulation mode. Since the initial proof of concept, the LaSRS++ NAS-wide simulation has been maturing into a research-ready tool. A primary benefit of this new capability is the consolidation of the two modeling paradigms under a single framework to save cost, facilitate iterative concept testing between the two tools, and promote communication and model sharing between user communities at Langley. Specific benefits of each type of modeling are discussed, along with the expected benefits of the unified framework. Current capability details of the LaSRS++ NAS-wide simulation are provided, including the visualization tool, live data interface, trajectory generators, terminal routing for arrivals and departures, maneuvering, re-routing, navigation, winds, and turbulence. The plan for future development is also described.

  19. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio, in cooperation with its Modeling, Analysis, and Prediction program, intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large, such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, on which the models are run, can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  20. Frequency of neonatal abstinence syndrome (NAS) and type of narcotic substance in neonates born to drug-addicted mothers

    Directory of Open Access Journals (Sweden)

    Fatemeh Nayeri

    2015-02-01

    Background and objective: NAS is a combination of signs and symptoms that, due to physical and mental dependency, develops in neonates born to drug-addicted mothers. The onset of NAS varies with the type, amount, frequency and duration of the substance used. Because of the diverse and unclear pattern of substance abuse among Iranian addicted pregnant mothers in comparison with western countries, this multi-center study was designed to evaluate NAS in neonates born to drug-addicted mothers. Material and method: A cross-sectional study was carried out on newborns of narcotic-addicted mothers during the first six months of 2008. The newborns' status and clinical signs were checked by physical examination and scored using the Finnegan scoring system. Results: In this study 100 neonates born to narcotic-addicted mothers were examined; the most used narcotic was crack (36%). 60% of the neonates showed signs of NAS. The most prevalent signs of NAS were increased muscle tone (60.7%), irritability (59.6%) and increased Moro reflex (51.8%). Neonates born to crack abusers, in comparison with other drugs, were significantly at risk of NAS (100% vs. 87%, p

  1. PROPRIEDADES FUNCIONAIS DAS PROTEÍNAS DE AMÊNDOAS DA MUNGUBA (Pachira aquatica Aubl.

    Directory of Open Access Journals (Sweden)

    BERNADETE DE LOURDES DE ARAÚJO SILVA

    2015-03-01

    Abstract: The seed of munguba (Pachira aquatica Aubl.) contains kernels with an excellent oil content and a significant percentage of protein. The aim was to determine some functional properties of munguba kernel proteins with a view to establishing their use in the food industry. The lipid content was 46.62%, the protein content 13.75%, and, in the form of press cake, the protein index was 28.27%. Two protein isolates were obtained, IP 2.0 and IP 10.0, resulting from two pH conditions (2.0 and 10.0). In obtaining the protein isolates, the extracted-protein indices were 38.52% for IP 2.0 and 82.06% for IP 10.0. The indices of protein recovered through isoelectric precipitation were 23.35% for IP 2.0 and 70.94% for IP 10.0, at pH 5.0. The functional properties showed minimum solubility at pH 5.0, at the isoelectric point (pI), with higher solubility at pH values acidic and alkaline relative to the pI. The best water and oil absorption capacities were shown by IP 10.0. The emulsifying properties were pH-dependent for both isolates, with IP 10.0 giving the better results. The functional properties studied allow the use of these protein isolates in food products that require high solubility, such as bakery products, pasta in general, dehydrated soups and sauces, in products that demand good oil absorption, such as meat analogues, and in products that require emulsifying power.

  2. Biología molecular de las proteínas accesorias del Virus de la Inmunodeficiencia Humana tipo 1 (VIH-1

    Directory of Open Access Journals (Sweden)

    Guillermo Cervantes Acosta

    2005-01-01

    The Human Immunodeficiency Virus type 1 (HIV-1) is a complex retrovirus that encodes 15 distinct proteins. Some of these proteins are not essential for viral replication. Nevertheless, these accessory proteins participate in different stages of the viral cycle, including the regulation of replication, infectivity, and viral egress. The mechanisms by which these accessory proteins direct certain viral and cellular events, and their interaction with cellular proteins, have become a very important research topic. This review discusses how the structural conformation of the HIV accessory proteins Nef, Vpr, Vif and Vpu relates to their putative phenotypes. Finally, our results from previous studies are also presented and discussed in light of recent developments in the field.

  3. Meeting of Experts on NASA's Unmanned Aircraft System (UAS) Integration in the National Airspace Systems (NAS) Project

    Science.gov (United States)

    Wolfe, Jean; Bauer, Jeff; Bixby, C.J.; Lauderdale, Todd; Shively, Jay; Griner, James; Hayhurst, Kelly

    2010-01-01

    Topics discussed include: Aeronautics Research Mission Directorate Integrated Systems Research Program (ISRP) and UAS Integration in the NAS Project; UAS Integration into the NAS Project; Separation Assurance and Collision Avoidance; Pilot Aircraft Interface Objectives/Rationale; Communication; Certification; and Integrated Tests and Evaluations.

  4. Dois caras numa garagem: o cinema alternativo dos fãs de Guerra nas Estrelas

    Directory of Open Access Journals (Sweden)

    Tietzmann, Roberto

    2003-01-01

    The Star Wars film series occupies a place in the popular imagination out of all proportion to the hundreds of films released annually by the audiovisual industry. This is echoed in the series' box-office results and in the fans' loyalty to the fantasy universe created by George Lucas.

  5. CONTROLE GERENCIAL: UMA ANÁLISE NAS EMPRESAS CONTÁBEIS DA CIDADE DE CAICÓ/RN.

    Directory of Open Access Journals (Sweden)

    Hugo Azevedo Rangel de Morais

    2016-07-01

    A company's management control is necessary for efficient internal development. With constant changes in legislation and the evolution of information technology in accounting firms, efficient management control is essential; through it, it is possible to see how the company is doing in its day-to-day operations. This paper shows the importance of management control for accounting firms located in the city of Caicó/RN, demonstrating its role in supporting decision making. The general objective of this study is to analyze whether the accounting firms of Caicó have management controls that assist them in decision making, verifying whether management accounting services are provided and analyzing the importance of management control for the business owners. The contextualization of the topic is based on bibliographic research. The methodology used in the research is classified as descriptive; from the standpoint of its nature it is applied research, with a qualitative approach of an exploratory character, and as to technical procedures it is a survey. It was observed that the owners are aware of the importance of management control; most put it into practice, achieving a good, reliable set of controls to support decision making, and it was found that most firms plan the objectives to be controlled.

  6. Emprego dos gangliosidos do cortex cerebral nas neuropatias perifericas

    Directory of Open Access Journals (Sweden)

    James Pitagoras De Mattos

    1981-12-01

    The authors report their personal experience with the use of cerebral cortex gangliosides in peripheral neuropathies. Clinical and electromyographic assessment showed efficacy in 30 of the 40 treated cases. They emphasize the better results obtained in cases of peripheral facial palsy.

  7. Advances and new functions of VCSEL photonics

    Science.gov (United States)

    Koyama, Fumio

    2014-11-01

    A vertical-cavity surface-emitting laser (VCSEL) was born in Japan. Thirty-seven years of research and development have opened up various applications including datacom, sensors, optical interconnects, spectroscopy, optical storage, printers, laser displays, laser radar, atomic clocks and high-power sources. A number of unique features have already been demonstrated, such as low power consumption and wafer-level testing. The market for VCSELs has been growing rapidly, and they are now key devices in local area networks based on multi-mode optical fibers. Optical interconnections in data centers and supercomputers are attracting much interest. In this paper, advances in VCSEL photonics are reviewed. We present high-speed modulation of VCSELs based on a coupled-cavity structure. For further increases in transmission capacity per fiber, the wavelength engineering of VCSEL arrays is discussed, including wavelength stabilization and wavelength tuning based on a micro-machined cantilever structure. We also address a lateral integration platform and new functions, including a high-resolution beam scanner, vortex beam creation, and a large-port free-space wavelength-selective switch with a Bragg reflector waveguide.

  8. Estado e controle nas prisões

    OpenAIRE

    Batista, Analía Soria

    2009-01-01

    This article analyzes the problem of producing control and order in Brazilian prisons from historical and sociological perspectives, and raises the hypothesis that in Brazil two modes of constructing order and control in prisons coexist. One of them, the minority mode, is based on the State's prerogative in managing day-to-day prison life. The other involves negotiating the pacification of the prison between the State and inmate leaderships. Although, in the first case, the prerogative of...

  9. UAS Integration in the NAS Project: Integrated Test and Evaluation (IT&E) Flight Test 3. Revision E

    Science.gov (United States)

    Marston, Michael

    2015-01-01

    The desire and ability to fly Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) is of increasing urgency. The application of unmanned aircraft to perform national security, defense, scientific, and emergency management missions is driving the critical need for less restrictive access by UAS to the NAS. UAS represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the NAS. NASA's UAS Integration into the NAS Project is conducting research in the areas of Separation Assurance/Sense and Avoid Interoperability, Human Systems Integration (HSI), and Communication to support reducing the barriers to UAS access to the NAS. This research is broken into two research themes, namely UAS Integration and Test Infrastructure. UAS Integration focuses on airspace integration procedures and performance standards to enable UAS integration in the air transportation system, covering Sense and Avoid (SAA) performance standards, command and control performance standards, and human systems integration. The focus of Test Infrastructure is to enable development and validation of airspace integration procedures and performance standards, including integrated test and evaluation. In support of the integrated test and evaluation efforts, the Project will develop an adaptable, scalable, and schedulable relevant test environment capable of evaluating concepts and technologies for unmanned aircraft systems to safely operate in the NAS. To accomplish this task, the Project will conduct a series of Human-in-the-Loop and Flight Test activities that integrate key concepts, technologies and/or procedures in a relevant air traffic environment. Each of the integrated events will build on the technical achievements, fidelity and complexity of the previous tests and

  10. Influencia del pH en la estabilidad de emulsiones elaboradas con proteínas de salvado de arroz

    Directory of Open Access Journals (Sweden)

    Laura Maldonado

    2011-12-01

    Although animal proteins may in many instances have better functional characteristics than plant proteins, their rising cost may favor the expanded use of plant proteins as replacements. One source of plant protein is rice bran, obtained as a by-product of polishing brown rice (Oryza sativa L.) to produce white rice. The creaming, flocculation and coalescence processes of emulsions prepared with rice bran proteins at pH 6.0 and 8.0 were studied. The rice bran proteins were obtained in an alkaline medium, starting from defatted rice bran. The destabilization process of the emulsions was analyzed from data obtained by the light-backscattering method using a Turbiscan 2000 instrument; in the case of creaming, the data were fitted to biphasic kinetics with a second-order (hyperbolic) component and a sigmoidal component. The emulsions prepared at pH 8 showed greater stability against creaming, while the flocculation and coalescence processes were not influenced by the different pH values.
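
    As a rough illustration of the kinetic fit mentioned above, the sketch below fits synthetic backscattering data to a biphasic model with a hyperbolic (second-order) term plus a sigmoidal term. The exact parameterization used by the authors is not given in the record, so the functional form and all parameter values here are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical biphasic creaming kinetics: a second-order (hyperbolic)
      # component plus a sigmoidal (logistic) component.
      def creaming(t, a1, k1, a2, k2, t0):
          hyperbolic = a1 * k1 * t / (1.0 + k1 * t)         # second-order part
          sigmoidal = a2 / (1.0 + np.exp(-k2 * (t - t0)))   # sigmoidal part
          return hyperbolic + sigmoidal

      t = np.linspace(0, 120, 60)                 # time, minutes (synthetic)
      bs = creaming(t, 8, 0.05, 12, 0.15, 60)     # synthetic backscattering data
      bs += np.random.default_rng(0).normal(0, 0.2, t.size)

      popt, _ = curve_fit(creaming, t, bs, p0=[5, 0.1, 10, 0.1, 50])
      print("fitted parameters:", popt)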

  11. Producción de proteínas recombinantes de Plasmodium falciparum en Escherichia coli

    Directory of Open Access Journals (Sweden)

    Ángela Patricia Guerra

    2016-04-01

    Conclusion: The use of genetically modified E. coli strains was fundamental to achieving high expression levels of the four recombinant proteins evaluated, and made it possible to obtain two of them in soluble form. The strategy used allowed four recombinant P. falciparum proteins to be expressed in sufficient quantity to immunize mice and produce polyclonal antibodies and, in addition, to preserve pure, soluble protein from two of them for future assays.

  12. Contactless electroreflectance and photoluminescence of InAs quantum dots with GaInNAs barriers grown on GaAs substrate

    International Nuclear Information System (INIS)

    Motyka, M.; Kudrawiec, R.; Misiewicz, J.; Pucicki, D.; Tlaczala, M.; Fischer, M.; Marquardt, B.; Forchel, A.

    2007-01-01

    InAs quantum dots (QDs) with GaInNAs barriers grown on (001) GaAs substrate by molecular beam epitaxy have been studied by contactless electroreflectance (CER) and photoluminescence (PL) spectroscopies. It has been observed that the overgrowth of self-organized InAs QDs with GaInNAs layers effectively tunes the QD emission to the 1.3 μm spectral region. In the case of the PL spectra, only one peak related to QD emission has been observed. In the case of the CER spectra, in addition to a CER feature corresponding to the QD ground state, a rich spectrum of CER resonances related to optical transitions in the InAs/GaInNAs/GaAs QW has been observed. It has been concluded that the application of GaInNAs instead of InGaAs leads to better control of the emission wavelength from InAs QDs, since the strain in GaInNAs can be tuned from compressive to tensile. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  13. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.
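
    The MPI-versus-OpenMP trade-off is, at its simplest, an exercise in dividing a node's cores between ranks, threads, and the network. Below is a toy enumeration for a 68-core KNL node (the Cori KNL part count); reserving two cores for OS and network progress is an assumed policy for illustration, not a recommendation from the paper.

      # Enumerate perfectly packed MPI-rank x OpenMP-thread layouts on a
      # hypothetical 68-core Knights Landing node, with some cores set aside
      # for network/OS progress. Counts and policy are illustrative.
      CORES_PER_NODE = 68   # cores on a KNL node (Xeon Phi 7250)
      RESERVED = 2          # cores reserved for OS/network progress (assumed)

      compute = CORES_PER_NODE - RESERVED
      print(f"{compute} compute cores available per node")
      for ranks in range(1, compute + 1):
          if compute % ranks == 0:              # only perfectly packed layouts
              threads = compute // ranks
              print(f"{ranks:3d} MPI ranks x {threads:2d} OpenMP threads")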

  14. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  15. Péptidos bioactivos en proteínas de reserva

    Directory of Open Access Journals (Sweden)

    Millán, F.

    2000-10-01

    A review of the bioactive peptides described so far in storage proteins, mainly milk proteins, has been carried out. Bioactive peptides are short amino acid sequences that are inactive within the native protein but can be liberated after hydrolysis of these proteins and then exert different functions. Among the main ones are peptides with opioid, opioid-antagonist, immunomodulatory, antithrombotic, ion-transporting or antihypertensive activity. The possible presence of these peptides in other protein sources, mainly oilseed plants, and their possible use are discussed.

  16. Advanced Modulation Techniques for High-Performance Computing Optical Interconnects

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We experimentally assess the performance of a 64 × 64 optical switch fabric used for ns-speed optical cell switching in supercomputer optical interconnects. More specifically, we study four alternative modulation formats and detection schemes, namely, 10-Gb/s nonreturn-to-zero differential phase-...

  17. Scientific Discovery through Advanced Computing in Plasma Science

    Science.gov (United States)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st century. For example, the Department of Energy's "Scientific Discovery through Advanced Computing" (SciDAC) program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by combining rapid advances in supercomputing technology with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high-temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations, with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations

  18. Self-assembled GaInNAs/GaAsN quantum dot lasers: solid source molecular beam epitaxy growth and high-temperature operation

    Directory of Open Access Journals (Sweden)

    Yoon SF

    2006-01-01

    Abstract: Self-assembled GaInNAs quantum dots (QDs) were grown on GaAs (001) substrate using solid-source molecular-beam epitaxy (SSMBE) equipped with a radio-frequency nitrogen plasma source. The GaInNAs QD growth characteristics were extensively investigated using atomic force microscopy (AFM), photoluminescence (PL), and transmission electron microscopy (TEM) measurements. Self-assembled GaInNAs/GaAsN single-layer QD lasers grown using SSMBE have been fabricated and characterized. The laser worked under continuous-wave (CW) operation at room temperature (RT) with an emission wavelength of 1175.86 nm. Temperature-dependent measurements have been carried out on the GaInNAs QD lasers. The lowest threshold current density obtained in this work is ~1.05 kA/cm² from a GaInNAs QD laser (50 × 1,700 µm²) at 10 °C. High-temperature operation up to 65 °C was demonstrated from an unbonded GaInNAs QD laser (50 × 1,060 µm²), with a high characteristic temperature of 79.4 K in the temperature range of 10-60 °C.
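
    The characteristic temperature quoted above comes from the usual empirical relation Jth(T) = J0 exp(T/T0), so T0 is the inverse slope of ln(Jth) versus temperature. A short sketch of the extraction follows; the threshold-current values are synthetic, and only T0 = 79.4 K over 10-60 °C comes from the record.

      import numpy as np

      # Characteristic temperature T0 from Jth(T) = J0 * exp(T / T0):
      # ln(Jth) vs T is a straight line of slope 1/T0.
      T = np.arange(10.0, 70.0, 10.0)             # heatsink temperature, deg C
      T0_true = 79.4                              # K, value reported in the record
      Jth = 1.05 * np.exp((T - T[0]) / T0_true)   # kA/cm^2, synthetic data

      slope, _ = np.polyfit(T, np.log(Jth), 1)
      print(f"extracted T0 = {1.0 / slope:.1f} K")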

  19. Expresión diferencial de proteínas en Leishmania (Viannia panamensis asociadas con mecanismos de resistencia a antimoniato de meglumina

    Directory of Open Access Journals (Sweden)

    Ronald Guillermo Peláez

    2012-05-01

    Introduction: The mechanisms of resistance to pentavalent antimony known to date have been widely described in strains of the subgenus Leishmania, but little is known about the proteins involved in the resistance mechanisms present in strains of the subgenus Viannia, such as Leishmania panamensis. Objective: To identify differentially expressed proteins between pentavalent-antimony-sensitive and -resistant strains of L. panamensis (UA140), and to analyze the possible role of these proteins in resistance mechanisms. Materials and methods: The proteins of the antimony-sensitive and -resistant strains were compared using two-dimensional electrophoresis. Proteins with increased expression were isolated and identified by mass spectrometry using MALDI-TOF/TOF (Matrix-Assisted Laser Desorption Ionization/Time of Flight). The mRNA expression of five of these proteins was quantified by real-time PCR. Results: The two-dimensional gels of the sensitive and resistant strains detected 532±39 and 541±43 protein spots. Ten spots with increased expression were found in the resistant strain, identified as heat shock proteins (mitochondrial Hsp60, mitochondrial and cytosolic Hsp70), disulfide isomerase, cysteine protease, enolase, elongation factor 5-α, the 5-α subunit of the proteasome, and two hypothetical proteins named Sp(2) and Sp(25). Conclusion: This is the first study carried out with a pentavalent-antimony-resistant strain of L. panamensis in which proteins related to the parasite's mechanism of resistance to the drug have been identified, opening the way for future studies of these proteins as therapeutic targets. doi: http://dx.doi.org/10.7705/biomedica.v32i3.392

  20. A pedagogia nas malhas de discursos legais

    OpenAIRE

    Jociane Rosa de Macedo Costa

    2002-01-01

    This dissertation deals with discourses of Brazilian educational legislation and related documents from a particular formation (in which significant changes took place in society and culture): pedagogy. Its objective is to show how these discourses, in prescribing the education of the pedagogue, produce a pedagogy that constitutes itself as a practice of government. It is a specific pedagogy, fabricated in the meshes of legal discourses and placed at the service of the nation for the production of...

  1. Las proteínas alergénicas: un novedoso blanco para el desarrollo de estudios en proteomica funcional

    Directory of Open Access Journals (Sweden)

    Elkin Navarro

    2008-01-01

    The environment, genetic background, and the immunocompetence of the individual are all involved in the pathogenesis of allergic diseases. Our immune system is continuously exposed to numerous proteins; however, only a few induce an allergic immune response. The intrinsic potential of an allergenic protein to induce sensitization manifests itself only in susceptible individuals, genetically conditioned to mount atopic responses. Many of these allergenic proteins share some homology in their amino acid sequence. These allergens possess a wide range of molecular characteristics, none of which is unique to allergenic proteins. Even so, some of these characteristics are more common among certain allergens than in other proteins. It has been shown that some proteins with enzymatic activity induce allergic reactions and that this biological property is associated with their catalytic activity. This review describes the main molecular characteristics of allergenic proteins, with emphasis on the cysteine proteases of house dust mites, since they are a risk factor for the development of an allergic immune response in susceptible individuals and act as triggers of inflammatory responses in the pathophysiology of allergic respiratory diseases.

  2. Functional Requirements Document for HALE UAS Operations in the NAS: Step 1. Version 3

    Science.gov (United States)

    2006-01-01

    The purpose of this Functional Requirements Document (FRD) is to compile the functional requirements needed to achieve the Access 5 vision of "operating High Altitude, Long Endurance (HALE) Unmanned Aircraft Systems (UAS) routinely, safely, and reliably in the national airspace system (NAS)" for Step 1. These functional requirements could support the development of a minimum set of policies, procedures and standards by the Federal Aviation Administration (FAA) and various standards organizations. It is envisioned that this comprehensive body of work will enable the FAA to establish and approve regulations to govern safe operation of UAS in the NAS on a routine or daily "file and fly" basis. The approach used to derive the functional requirements found within this FRD was to decompose the operational requirements and objectives identified within the Access 5 Concept of Operations (CONOPS) into the functions needed to routinely and safely operate a HALE UAS in the NAS. As a result, four major functional areas evolved to enable routine and safe UAS operations on an on-demand basis in the NAS. These four major functions are: Aviate, Navigate, Communicate, and Avoid Hazards. All of the functional requirements within this document are directly traceable to one of these four major functions. Some functions, however, are traceable to several, or even all, of these four major functions. These cross-cutting functional requirements support the "Command/Control" function as well as the "Manage Contingencies" function. The requirements associated with these high-level functions and all of their supporting low-level functions are addressed in subsequent sections of this document.

  3. The CAS-NAS forum for new leaders in space science

    Science.gov (United States)

    Smith, David H.

    The space science community is thoroughly international, with numerous nations now capable of launching scientific payloads into space either independently or in concert with others. As such, it is important for national space-science advisory groups to engage with like-minded groups in other spacefaring nations. The Space Studies Board of the US National Academy of Sciences' (NAS') National Research Council has provided scientific and technical advice to NASA for more than 50 years. Over this period, the Board has developed important multilateral and bilateral partnerships with space scientists around the world. The primary multilateral partner is COSPAR, for which the Board serves as the US national committee. The Board's primary bilateral relationship is with the European Science Foundation’s European Space Science Committee. Burgeoning Chinese space activities have resulted in several attempts in the past decade to open a dialogue between the Board and space scientists in China. On each occasion, the external political environment was not conducive to success. The most recent efforts to engage the Chinese space researchers began in 2011 and have proved particularly successful. Although NASA is currently prohibited from engaging in bilateral activities with China, the Board has established a fruitful dialogue with its counterpart in the Chinese Academy of Sciences (CAS). A joint NAS-CAS activity, the Forum for New Leaders in Space Science, has been established to provide opportunities for a highly select group of young space scientists from China and the United States to discuss their research activities in an intimate and collegial environment at meetings to be held in both nations. The presentation will describe the current state of US-China space relations, discuss the goals of the joint NAS-CAS undertaking and report on the activities at the May, 2014, Forum in Beijing and the planning for the November, 2014, Forum in Irvine, California.

  4. Flujo y concentración de proteínas en saliva total humana

    Directory of Open Access Journals (Sweden)

    BANDERAS-TARABAY JOSÉ ANTONIO

    1997-01-01

    Objective: To determine average salivary flow rates and total protein concentration in a young population of the State of Mexico. Material and methods: 120 subjects were selected, from whom unstimulated and stimulated whole human saliva (WHS) was collected and analyzed by gravimetry and spectrophotometry (LV/LU); measures of central tendency and dispersion were calculated, and these data were then correlated with the CPOD and CPITN indices. Results: The subjects studied showed an average salivary flow (ml/min ± SD) of 0.397±.26 for unstimulated WHS and 0.973±.53 for stimulated WHS. The average protein concentration (mg/ml ± SD) was 1.374±.45 in unstimulated WHS and 1.526±.44 in stimulated WHS. Women presented a lower salivary flow and a higher protein concentration. No correlations were observed between flow or total protein concentration and CPOD or CPITN; however, correlations with other variables were found. Conclusions: These findings could be associated with nutritional status, genetic characteristics, and oral health levels in our population. The present study represents the initial phase of creating a sialochemistry database whose goal will be to identify parameters indicating the risk of systemic or oral disease.

  5. Trabalhadores publicos nas administrações regionais e subprefeituras : uma categoria ameaçada

    OpenAIRE

    João Petrucio Medeiros da Silva

    2005-01-01

    Abstract: Neoliberal policy and the rationalization process stemming from the State-reform policies implemented from the 1990s onward had strong impacts on the organization and labor relations of the public sector, above all on the category of municipal public servants, in particular the general-services assistants of the Municipality of Campinas who work in the Regional Administrations and Subprefectures. The process of privatization, deregulation and f...

  6. 2016 ALCF Science Highlights

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Wolf, Laura [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  7. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  8. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  9. The temperature dependence of atomic incorporation characteristics in growing GaInNAs films

    International Nuclear Information System (INIS)

    Li, Jingling; Gao, Fangliang; Wen, Lei; Zhou, Shizhong; Zhang, Shuguang; Li, Guoqiang

    2015-01-01

    We have systematically studied the temperature dependence of the incorporation characteristics of nitrogen (N) and indium (In) in growing GaInNAs films. With the implementation of Monte Carlo simulation, the low N adsorption energy (−0.10 eV) is demonstrated. To understand the atomic incorporation mechanism, the temperature dependence of interactions between Group-III and Group-V elements is subsequently discussed. We find that the In incorporation behavior, rather than that of N, is more sensitive to the growth temperature T_g, which can be experimentally verified by exploring the compositional modulation and structural changes of the GaInNAs films by means of high-resolution X-ray diffraction, X-ray photoelectron spectroscopy, scanning electron microscopy, and secondary ion mass spectrometry.
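
    As a hint of how an adsorption energy enters a Monte Carlo growth model, the sketch below applies the standard Metropolis acceptance test to a desorption event that breaks a −0.10 eV bond at typical growth temperatures. The single-event move set is an illustrative assumption, not the simulation used in the paper.

      import math, random

      # Toy Metropolis acceptance test: an energy-raising move (here, breaking
      # a 0.10 eV adsorption bond) is accepted with probability exp(-dE/kT),
      # illustrating why a weakly bound species desorbs readily as T rises.
      KB_EV = 8.617e-5  # Boltzmann constant, eV/K

      def accept(delta_e_ev, temperature_k):
          if delta_e_ev <= 0.0:
              return True
          return random.random() < math.exp(-delta_e_ev / (KB_EV * temperature_k))

      random.seed(1)
      for T in (700.0, 800.0, 900.0):   # representative MBE growth temperatures, K
          trials = 100_000
          desorbed = sum(accept(0.10, T) for _ in range(trials))
          print(f"T = {T:.0f} K: desorption acceptance ~ {desorbed / trials:.2f}")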

  10. Tutoria nas aulas de Educação Física inclusiva: uma revisão sistemática

    Directory of Open Access Journals (Sweden)

    Juliana Aparecida de Paula Schuller

    2016-09-01

    For physical education classes to be effectively inclusive, one strategy that has been adopted by professionals in the field is tutoring. Objective: a systematic review of the literature was carried out on the effects of tutoring on the inclusion of students with disabilities in physical education classes. Method: a search for articles was conducted in the CAPES Periodicals and Web of Knowledge databases using the terms "Peer Tutoring" and "Physical Education". Articles were included if they used tutoring as a strategy for the inclusion of students with disabilities in physical education classes; four articles were selected. Results: it was found that the inclusion of students with disabilities in physical education classes may not be successful if no complementary assistance is offered to these students in the activities carried out. Final considerations: the use of peer tutoring by tutors of the same age as the students with disabilities proves to be a valuable strategy for inclusion, as it positively fosters interactions among students.

  11. Scientific and technical support of the operation and development of nuclear technologies by institutes of the NAS of Ukraine

    International Nuclear Information System (INIS)

    Neklyudov, Yi.M.; Volobujev, O.V.

    2011-01-01

    The significant role of the NAS of Ukraine in the development and implementation of innovations in the field of nuclear and radiation technologies, and its significant contribution to the solution of current problems in these technologies, is shown.

  12. A participação dos pais nas pesquisas sobre o bullying escolar

    Directory of Open Access Journals (Sweden)

    Juliane Callegaro Borsa

    Bullying is a common problem in peer interaction and can cause various harms throughout the development of both victimized and aggressor children. Recent research indicates a high frequency of bullying in Brazilian schools, but studies that approach this phenomenon from a multifactorial perspective are still scarce. This article aims to present the concept of bullying and to show the importance of considering family-context variables for its understanding. It highlights the need to include children's parents as participants in empirical research on school bullying, and the importance of their participation both in the assessment and in the prevention of this problem. Finally, it discusses the inclusion of parents in intervention strategies against bullying, with a view to reducing the risk factors present in the family environment and their harm to children's socio-emotional development.

  13. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project KDP-C Review

    Science.gov (United States)

    Grindle, Laurie; Sakahara, Robert; Hackenberg, Davis; Johnson, William

    2017-01-01

    The topics discussed are the UAS-NAS project life cycle and ARMD thrust flow-down, as well as the UAS environments and how we operate in those environments. NASA's Armstrong Flight Research Center at Edwards, CA, is leading a project designed to help integrate unmanned air vehicles into the world around us. The Unmanned Aircraft Systems Integration in the National Airspace System project, or UAS in the NAS, will contribute capabilities designed to reduce technical barriers related to safety and operational challenges associated with enabling routine UAS access to the NAS. The project falls under the Integrated Systems Research Program office managed at NASA Headquarters by the agency's Aeronautics Research Mission Directorate. NASA's four aeronautics research centers - Armstrong, Ames Research Center, Langley Research Center, and Glenn Research Center - are part of the technology development project. With the use and diversity of unmanned aircraft growing rapidly, new uses for these vehicles are constantly being considered. Unmanned aircraft promise new ways of increasing efficiency, reducing costs, enhancing safety and saving lives. Unmanned aircraft systems such as NASA's Global Hawks and the Predator B named Ikhana, along with numerous other unmanned aircraft systems large and small, are the prime focus of the UAS in the NAS effort to integrate them into the national airspace. The UAS in the NAS project envisions performance-based routine access to all segments of the national airspace for all unmanned aircraft system classes, once all safety-related and technical barriers are overcome. The project will provide critical data to such key stakeholders and customers as the Federal Aviation Administration and RTCA Special Committee 203 (formerly the Radio Technical Commission for Aeronautics) by conducting integrated, relevant system-level tests to adequately address

  14. Theoretical studies of optical gain tuning by hydrostatic pressure in GaInNAs/GaAs quantum wells

    International Nuclear Information System (INIS)

    Gladysiewicz, M.; Wartak, M. S.; Kudrawiec, R.

    2014-01-01

    In order to describe theoretically the tuning of the optical gain by hydrostatic pressure in GaInNAs/GaAs quantum wells (QWs), optical gain calculations within the kp approach were developed and applied to N-containing and N-free QWs. The electronic band structure and the optical gain for the GaInNAs/GaAs QW were calculated within the 10-band kp model, which takes into account the interaction of electron levels in the QW with the nitrogen resonant level in GaInNAs. It has been shown that this interaction increases with hydrostatic pressure; as a result, the optical gain for the GaInNAs/GaAs QW decreases by about 40% and 80% for the transverse electric and transverse magnetic modes, respectively, for a hydrostatic pressure change from 0 to 40 kilobars. Such an effect is not observed for N-free QWs, where the dispersion of electron and hole energies remains unchanged with hydrostatic pressure. This is because the conduction and valence band potentials in the GaInAs/GaAs QW scale linearly with hydrostatic pressure.
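
    The interaction between the nitrogen resonant level and the conduction band that drives this pressure dependence is commonly summarized by the two-level band-anticrossing (BAC) model, a reduced cousin of the 10-band kp treatment. A sketch follows; the pressure coefficients and coupling strength are assumed illustrative values, not the paper's parameters.

      import numpy as np

      # Two-level band-anticrossing (BAC) model for dilute nitrides:
      #   E_(+/-) = 0.5 * ((EN + EM) +/- sqrt((EN - EM)^2 + 4 V^2))
      # EN: N resonant level; EM: host conduction-band edge; V: coupling.
      def bac_levels(pressure_kbar, x_n=0.02):
          EM = 1.00 + 0.0100 * pressure_kbar   # host CB edge, eV (~10 meV/kbar, assumed)
          EN = 1.65 + 0.0015 * pressure_kbar   # N level, eV (much weaker shift, assumed)
          V = 2.7 * np.sqrt(x_n)               # coupling ~ beta*sqrt(x), eV (assumed)
          s = np.sqrt((EN - EM) ** 2 + 4.0 * V ** 2)
          return 0.5 * (EN + EM - s), 0.5 * (EN + EM + s)   # E-, E+

      # As pressure pushes EM toward EN, the anticrossing mixing grows,
      # which is the mechanism behind the gain reduction described above.
      for P in (0.0, 20.0, 40.0):
          e_minus, e_plus = bac_levels(P)
          print(f"P = {P:4.0f} kbar: E- = {e_minus:.3f} eV, E+ = {e_plus:.3f} eV")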

  15. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources, the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it is natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (the Little Iron) and staff (the Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys don't receive the same level of treatment as the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be, that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  16. The use of net analyte signal (NAS) in near infrared spectroscopy pharmaceutical applications: interpretability and figures of merit.

    Science.gov (United States)

    Sarraguça, Mafalda Cruz; Lopes, João Almeida

    2009-05-29

    Near infrared spectroscopy (NIRS) has been extensively used as an analytical method for quality control of solid dosage forms in the pharmaceutical industry. Pharmaceutical formulations can be extremely complex, typically containing one or more active product ingredients (APIs) and various excipients, yielding very complex near infrared (NIR) spectra. The interpretability of NIR spectra can be improved using the concept of the net analyte signal (NAS). The NAS is defined as the part of the spectrum unique to the analyte of interest. The objective of this work was to compare two different methods of estimating the API's NAS vector for different pharmaceutical formulations. The main difference between the methods is whether the NIR spectra of API-free formulations are known. The two methods were compared both qualitatively and quantitatively. Results showed that both methods produced good results in terms of the similarity between the NAS vector and the pure API spectrum, as well as in the ability to predict the API concentration of unknown samples. Moreover, figures of merit such as sensitivity, selectivity, and limit of detection were estimated in a straightforward manner.
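
    A minimal numerical sketch of the NAS idea, assuming the API-free (interferent) spectra are known: the mixture spectrum is projected onto the orthogonal complement of the interferent subspace, and a selectivity figure of merit falls out directly. All spectra below are synthetic Gaussians; real NIR data and preprocessing are omitted.

      import numpy as np

      wl = np.linspace(1100, 2500, 400)         # wavelength grid, nm
      band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

      api = band(1650, 40)                                        # pure analyte spectrum
      interferents = np.stack([band(1400, 60), band(2100, 80)])   # excipient spectra

      # Orthogonal projector onto the complement of the interferent subspace:
      # P = I - X^+ X, where X holds the interferent spectra as rows.
      X = interferents
      P = np.eye(wl.size) - np.linalg.pinv(X) @ X

      nas_vector = P @ api                       # part of the API spectrum unique to it
      mixture = 0.3 * api + 0.7 * interferents.sum(axis=0)
      nas_value = (P @ mixture) @ nas_vector / (nas_vector @ nas_vector)
      print(f"recovered analyte contribution ~ {nas_value:.3f}")   # ~0.3

      # Selectivity: fraction of the API signal that survives the projection.
      print(f"selectivity = {np.linalg.norm(nas_vector) / np.linalg.norm(api):.2f}")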

  17. Concepts of Integration for UAS Operations in the NAS

    Science.gov (United States)

    Consiglio, Maria C.; Chamberlain, James P.; Munoz, Cesar A.; Hoffler, Keith D.

    2012-01-01

    One of the major challenges facing the integration of Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) is the lack of an onboard pilot that can comply with the legal requirement identified in the US Code of Federal Regulations (CFR) that pilots see and avoid other aircraft. UAS will be expected to demonstrate the means to perform the function of see and avoid while preserving the safety level of the airspace and the efficiency of the air traffic system. This paper introduces a Sense and Avoid (SAA) concept for integration of UAS into the NAS that is currently being developed by the National Aeronautics and Space Administration (NASA) and identifies areas that require additional experimental evaluation to further inform various elements of the concept. The concept design rests on interoperability principles that take into account both the Air Traffic Control (ATC) environment as well as existing systems such as the Traffic Alert and Collision Avoidance System (TCAS). Specifically, the concept addresses the determination of well clear values that are large enough to avoid issuance of TCAS corrective Resolution Advisories, undue concern by pilots of proximate aircraft and issuance of controller traffic alerts. The concept also addresses appropriate declaration times for projected losses of well clear conditions and maneuvers to regain well clear separation.
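
    At its core, a well-clear determination of this kind reduces to predicting the time to, and separation at, the closest point of approach (CPA). Below is a schematic 2-D sketch with invented thresholds; the actual well-clear definition under study is richer and includes the TCAS interoperability constraints discussed above.

      import numpy as np

      # Schematic 2-D well-clear check: time to closest point of approach and
      # predicted miss distance for two aircraft with constant velocities.
      # Threshold values are invented for illustration only.
      def cpa(p_own, v_own, p_int, v_int):
          dp = p_int - p_own                 # relative position, m
          dv = v_int - v_own                 # relative velocity, m/s
          t = -(dp @ dv) / (dv @ dv) if dv @ dv > 0 else 0.0
          t = max(t, 0.0)                    # CPA in the past: use current separation
          miss = np.linalg.norm(dp + t * dv)
          return t, miss

      own = (np.array([0.0, 0.0]), np.array([80.0, 0.0]))
      intruder = (np.array([9000.0, 1500.0]), np.array([-70.0, 0.0]))
      t_cpa, d_cpa = cpa(*own, *intruder)
      well_clear = d_cpa > 2000.0 or t_cpa > 120.0     # assumed thresholds
      print(f"t_cpa = {t_cpa:.0f} s, miss = {d_cpa:.0f} m, well clear: {well_clear}")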

  18. Entre gueixas e samurais: a imigração japonesa nas revistas ilustradas (1897-1945)

    OpenAIRE

    Marcia Yumi Takeuchi

    2009-01-01

    This research aims to analyze the debates around Japanese immigration in Brazilian illustrated magazines published in the cities of São Paulo and Rio de Janeiro, and in diplomatic documentation, in view of the spread of anti-Japanese sentiment in Brazilian society between 1897 and 1945. Based on the analysis of the cartoons and caricatures published in these magazines, whether literary or irreverent (comic) in character, I seek to show that iconography played a fundamental role in the construction...

  19. Neonatal Abstinence Syndrome (NAS): Transitioning Methadone Treated Infants From An Inpatient to an Outpatient Setting

    Science.gov (United States)

    Backes, Carl H.; Backes, Carl R.; Gardner, Debra; Nankervis, Craig A.; Giannone, Peter J.; Cordero, Leandro

    2013-01-01

    Background: Each year in the US approximately 50,000 neonates receive inpatient pharmacotherapy for the treatment of neonatal abstinence syndrome (NAS). Objective: To compare the safety and efficacy of a traditional inpatient-only approach with a combined inpatient and outpatient methadone treatment program. Design/Methods: Retrospective review (2007-2009). Infants were born to mothers maintained on methadone or buprenorphine in an antenatal substance abuse program. All infants received methadone for NAS treatment as inpatients. Methadone weaning was conducted inpatient for the traditional group (75 patients) and outpatient for the combined group (46 patients). Results: Infants in the traditional and combined groups were similar in demographics, obstetrical risk factors, birth weight, gestational age and the incidence of prematurity (34 and 31%). Hospital stay was shorter in the combined group than in the traditional group (13 vs 25 days; p < 0.01). Although the duration of treatment was longer for infants in the combined group (37 vs 21 days, p < 0.01), the cumulative methadone dose was similar (3.6 vs 3.1 mg/kg, p = 0.42). Follow-up information was available for 80% of infants in the traditional group and 100% of infants in the combined group. All infants in the combined group were seen within 72 hours of hospital discharge. Breast feeding was more common among infants in the combined group (24 vs. 8%, p < 0.05). Following discharge there were no differences between the two groups in hospital readmissions for NAS. Prematurity (<37 weeks gestational age) was the only predictor of hospital readmission for NAS in both groups (p = 0.02, OR 5). The average hospital cost for each infant in the combined group was $13,817 less than in the traditional group. Conclusions: A combined inpatient and outpatient methadone treatment program in the management of NAS decreases hospital stay and substantially reduces cost. Additional studies are needed to evaluate the potential long-term benefits of the combined approach on infants and their families. PMID:21852772

  20. GESTÃO DO CONHECIMENTO NAS ORGANIZAÇÕES OU DO DESCONHECIMENTO DA REALIDADE ORGANIZACIONAL?

    Directory of Open Access Journals (Sweden)

    Fladimir F. dos Santos

    2005-12-01

    This article discusses the validity of the purposes of knowledge management as a tool for organizational intervention, describing some paradoxes between its theory and its practice in organizations. It is argued that its original purposes of creating, diffusing and incorporating new knowledge in the organization are giving way to an approach that does not match the reality of organizations. A reading of the evolution of management theories is also offered, to elucidate how this new management paradigm is being approached in companies. It is proposed that this amounts to a reification of the Taylorist maxim, whose original purposes are being converted into yet another instrument of human manipulation in companies. Finally, in place of the current management model, a values-based management process is proposed.

  1. Ensinar e aprender geografia com/nas redes sociais

    Directory of Open Access Journals (Sweden)

    Élida Pasini Tonetto

    2015-01-01

    This study reflects on the potential and the operationalization of pedagogical practices in geography when appropriating online social networks. To this end, we analyze the possible affordances offered by online social networks for geography, how they can be put to work in pedagogical practices for its teaching, and how they can contribute to teaching and learning geography more meaningfully. The theoretical threads of the research are woven from an understanding of online learning, entangling the concepts of space and cyberspace and moving between two fundamental sites: the school and the networks. The methodological approach follows the lines of post-critical research in education, with Facebook as the locus for analyzing the new forms of communication that subjectivize individuals and engender new formats of teaching. The results point to different affordances and uses of online social networks, which do not represent the mere use of technique in the classroom, but rather form part of the quest to build meaningful learning processes in geography through social networks, a contemporary form of communicating and interacting present in students' daily lives.

  2. Evaluation of existing and proposed computer architectures for future ground-based systems

    Science.gov (United States)

    Schulbach, C.

    1985-01-01

    Parallel processing architectures and techniques used in current supercomputers are described, and projections are made of future advances. Presently, the von Neumann sequential processing pattern has been accelerated by having separate I/O processors, interleaved memories, wide memories, independent functional units and pipelining. Recent supercomputers have featured single-instruction, multiple-data-stream architectures, which have different processors for performing various operations (vector or pipeline processors). Multiple-instruction, multiple-data-stream machines have also been developed. Data flow techniques, wherein program instructions are activated only when data are available, are expected to play a large role in future supercomputers, along with increased parallel processor arrays. The enhanced operational speeds are essential for adequately treating data from future spacecraft remote sensing instruments such as the Thematic Mapper.

  3. Os direitos fundamentais da personalidade como instrumento para atingir a dignidade da pessoa humana nas relações de trabalho

    OpenAIRE

    Sanvito, Paulo Celso

    2011-01-01

    This work addresses fundamental, social and personality rights and their operation in private relations, with greater focus on labor relations, with the aim of verifying the possibility of their incidence in labor relations, going beyond an exclusively economic-financial labor law and seeking the prevalence of human dignity precisely through fundamental rights. Through a dialectical approach, analyzing by means of forms of reasoning such as the inductive and deductive...

  4. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail, and the performance of the VCR on a three-parameter model problem is illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32, and as a consequence the algorithm is implicitly vectorizable.
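
    For reference, the point cyclic reduction scheme that VCR adapts can be sketched in a few lines: kept rows absorb their eliminated neighbors, halving the tridiagonal system at each level. A toy NumPy version follows (not the VPS 32 implementation); note that at each level the updates across all rows are independent, which is what makes the scheme vectorizable.

      import numpy as np

      def solve_cr(a, b, c, d):
          """Cyclic reduction for a tridiagonal system of size n = 2^k - 1.
          a, b, c: sub-, main-, super-diagonals, with a[0] == 0 and c[-1] == 0."""
          n = len(b)
          if n == 1:
              return d / b
          i = np.arange(1, n - 1, 2)                 # odd rows kept for next level
          alpha, gamma = a[i] / b[i - 1], c[i] / b[i + 1]
          a2 = -alpha * a[i - 1]
          b2 = b[i] - alpha * c[i - 1] - gamma * a[i + 1]
          c2 = -gamma * c[i + 1]
          d2 = d[i] - alpha * d[i - 1] - gamma * d[i + 1]
          x = np.empty(n)
          x[1::2] = solve_cr(a2, b2, c2, d2)         # solve the half-size system
          xo = np.concatenate(([0.0], x[1::2], [0.0]))
          j = np.arange(0, n, 2)                     # back-substitute even rows
          x[j] = (d[j] - a[j] * xo[j // 2] - c[j] * xo[j // 2 + 1]) / b[j]
          return x

      n = 7                                          # 2^3 - 1
      a = -np.ones(n); a[0] = 0.0
      b = 2.0 * np.ones(n)
      c = -np.ones(n); c[-1] = 0.0
      d = np.arange(1.0, n + 1)
      x = solve_cr(a, b, c, d)
      A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      print("residual:", np.max(np.abs(A @ x - d)))  # ~ 0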

  5. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington, DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second, linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, DC. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  6. Negative advertising in Brazilian presidential elections

    OpenAIRE

    Borba, Felipe

    2015-01-01

    Abstract: This article investigates negative advertising in Brazilian presidential elections, a highly relevant topic given that recent literature suggests the tone of campaigns has important consequences for vote choice, political participation, and voters' level of information. However, most of these studies were conducted to understand the political reality of the United States. In Brazil, despite growing interest in the effects ...

  7. Strategic thinking in organizations

    OpenAIRE

    Kich, Juliane Ines Di Francesco

    2015-01-01

    Thesis (doctorate) - Universidade Federal de Santa Catarina, Centro Sócio-Econômico, Programa de Pós-Graduação em Administração, Florianópolis, 2015. This thesis proposes a model to support the development of strategic thinking in organizations. Its main objective is to answer the following research question: which attributes form the concept of strategic thinking, and which organizational elements develop those attributes in the members of an organiza...

  8. High-temperature operation of self-assembled GaInNAs/GaAsN quantum-dot lasers grown by solid-source molecular-beam epitaxy

    International Nuclear Information System (INIS)

    Liu, C.Y.; Yoon, S.F.; Sun, Z.Z.; Yew, K.C.

    2006-01-01

    Self-assembled GaInNAs/GaAsN single-layer quantum-dot (QD) lasers grown by solid-source molecular-beam epitaxy have been fabricated and characterized. Temperature-dependent measurements have been carried out on the GaInNAs QD lasers. The lowest threshold current density obtained in this work is ~1.05 kA/cm² from a GaInNAs QD laser (50×1700 μm²) at 10 °C. High-temperature operation up to 65 °C was also demonstrated from an unbonded GaInNAs QD laser (50×1060 μm²), with a high characteristic temperature of 79.4 K in the temperature range of 10-60 °C.
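
    For reference, the characteristic temperature quoted above enters through the usual empirical threshold relation for laser diodes (standard phenomenology, not spelled out in the record):

        J_{th}(T) = J_0 \exp(T / T_0)

    so over the reported 10-60 °C range, T_0 = 79.4 K implies a threshold-current increase of roughly exp(50/79.4) ≈ 1.9×.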

  9. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project - Systems Integration and Operationalization (SIO) Demonstration

    Science.gov (United States)

    Swieringa, Kurt

    2018-01-01

    The UAS-NAS Project hosted a Systems Integration and Operationalization (SIO) Industry Day for the SIO Request for Information (RFI) on November 30, 2017, in San Diego, California. This presentation is a follow-up to the same group on the progress the UAS-NAS Project has made on the SIO RFI. The presentation will be delivered virtually, with a teleconference.

  10. Host proteins incorporated into the Human Immunodeficiency Virus type 1 (HIV-1

    Directory of Open Access Journals (Sweden)

    Dayane Canedo de León

    2008-01-01

    Full Text Available The Human Immunodeficiency Virus (HIV), like most enveloped viruses, acquires this structure when it leaves the infected cell. During this process the virus acquires, along with fragments of the host-cell membrane, proteins derived from the cell membrane as an integral part of the mature envelope. These host-derived components of the viral envelope can exert effects on the virus life cycle and on virus-cell interactions, especially on the host response to its own proteins incorporated by the virus and, ultimately, on the pathogenesis of virus-induced disease. The role of these proteins has received increasing attention, specifically regarding their possible importance in the viral infectious process and in the development of Acquired Immunodeficiency Syndrome (AIDS). The aim of this article is to review the host proteins incorporated by HIV, with emphasis on their potential role in the pathogenesis of AIDS.

  11. Regulation of intracellular protein degradation by glucose.

    OpenAIRE

    MORUNO MANCHON, JOSE FELIX

    2014-01-01

    Cell survival in the face of environmental change requires maintaining a dynamic equilibrium between protein synthesis and protein degradation. Protein degradation, besides regulating various cellular processes, serves mainly to eliminate products that are not useful to the cell in certain situations or whose accumulation may be toxic. The products of this degradation, the amino acids, are reused for the synthesis of new ...

  12. The Reflection of Quantum Aesthetics in Algis Mickūnas Cosmic Philosophy

    Directory of Open Access Journals (Sweden)

    Auridas Gajauskas

    2011-04-01

    Full Text Available The quantum aesthetics phenomenon took shape in Spain at the end of the twentieth century. The paper analyzes this movement in the context of Algis Mickūnas's phenomenological cosmic philosophy. The movement's initiator is the Spanish novelist Gregorio Morales. The study is divided into two parts: the first presents the aesthetic principles of the quantum and the relationship between the new aesthetics and theories of quantum mechanics, physics, and other sciences; it also examines the similarities between quantum aesthetics and New Age movements. The second part presents a cosmic-phenomenological reflection on the quantum theory of beauty. Mickūnas's philosophical position combines the theory of "eternal recurrence", "the bodily nature of consciousness", "the cosmic dance", the theory of "dynamic fields", and a quantum approach to aesthetics and the Universe. Summa summarum, he writes that "the conception of quantum aesthetics is involved in the composition of the rhythmic, cyclical and mood dimensioned and tensed world".

  13. Minimally invasive thoracotomy in valvular surgical procedures

    Directory of Open Access Journals (Sweden)

    PEREIRA Marcelo Balestro

    1998-01-01

    Full Text Available Introduction: performing surgical procedures through minithoracotomies is a current topic; initially used for myocardial revascularization operations, they have also been proposed as access for valve operations. The objective of this prospective study is to analyze the results of minithoracotomy compared with the traditional technique in valve interventions. Patients and methods: between November 1996 and February 1998, two groups, 8 patients operated on through minithoracotomy (Group 1) and 8 controls (Group 2) matched for sex, age, weight/height, preoperative functional class, underlying disease and proposed operation, underwent aortic or mitral valve repair or replacement. Group 1 patients were operated on through a right parasternal incision of up to 8 cm, with cardiopulmonary bypass (CPB) established through femoral arterial and venous cannulation, and Group 2 (controls) through median sternotomy. Both groups were followed until hospital discharge. Results: the parameters evaluated intra- and postoperatively, together with the statistical analysis, are given in Tables 1 and 2. There were no immediate deaths. Two complications were recorded, both in Group 2: one perioperative infarction and one stroke. Conclusion: the partial results allow the inference that the approach through small thoracotomies is feasible without an increase in morbidity and mortality, surgical time, or hospital stay. Objective advantages of one method over the other, apart from the aesthetic aspect, are not evident at this stage of the study.

  14. Challenges in safety regulation of R and D activities for advanced technologies in DAE units

    International Nuclear Information System (INIS)

    Shukla, Dinesh Kumar

    2016-01-01

    DAE is engaged in intensive research and development activities, especially for advanced technologies such as accelerators, lasers, supercomputers, advanced materials and instrumentation. The starting point of an R and D project might be a hypothesis to be tested, a problem to be solved, or the performance of an item to be improved, and there may be many possible solutions and technologies that could be used. R and D is quite different from designing, constructing, or operating a plant, where a precisely described result can be defined from the beginning and captured in design specifications, process descriptions and procedures. While established procedures may be available at the start of an R and D project, deviation from these procedures often occurs as a legitimate part of the conduct of R and D. Nevertheless, R and D activities have to be performed in a manner that provides assurance that safety requirements are adequately addressed. Hence, the regulatory approach for enforcing safety regulation in such facilities is not as rigid as for an operating industry. This paper discusses some of the key challenges in regulating such R and D activities and attempts to suggest a way forward. (author)

  15. UAS Integration in the NAS Project: Flight Test 3 Data Analysis of JADEM-Autoresolver Detect and Avoid System

    Science.gov (United States)

    Gong, Chester; Wu, Minghong G.; Santiago, Confesor

    2016-01-01

    The Unmanned Aircraft Systems Integration in the National Airspace System project, or UAS Integration in the NAS, aims to reduce technical barriers related to safety and operational challenges associated with enabling routine UAS access to the NAS. The UAS Integration in the NAS Project conducted a flight test activity, referred to as Flight Test 3 (FT3), involving several Detect-and-Avoid (DAA) research prototype systems between June 15, 2015 and August 12, 2015 at the Armstrong Flight Research Center (AFRC). This report documents the flight testing and analysis results for the NASA Ames-developed JADEM-Autoresolver DAA system, referred to as 'Autoresolver' herein. Four flight test days (June 17, 18, 22, and July 22) were dedicated to Autoresolver testing. The objectives of this test were as follows: 1. Validate CPA prediction accuracy and detect-and-avoid (DAA, formerly known as self-separation) alerting logic in realistic flight conditions. 2. Validate DAA trajectory model including maneuvers. 3. Evaluate TCAS/DAA interoperability. 4. Inform final Minimum Operating Performance Standards (MOPS). Flight test scenarios were designed to collect data to directly address the objectives 1-3. Objective 4, inform final MOPS, was a general objective applicable to the UAS in the NAS project as a whole, of which flight test is a subset. This report presents analysis results completed in support of the UAS in the NAS project FT3 data review conducted on October 20, 2015. Due to time constraints and, to a lesser extent, TCAS data collection issues, objective 3 was not evaluated in this analysis.

  16. Acute confusional state in intensive care units

    OpenAIRE

    Santos, L; Alcântara, J

    1996-01-01

    The behavioral disturbances frequently observed in patients admitted to intensive care units (ICUs) can, in most cases, be appropriately designated acute confusional state, which is characterized by: fluctuation of the level of alertness, disturbance of the sleep-wake cycle, deficits of attention and concentration, disorganized thinking, manifested among other ways by incoherent speech, perceptual disturbances in the form of illusions and/or hallucinations, disorient...

  17. Atmospheric refraction in Doppler measurements

    OpenAIRE

    Oliveira, Leonardo Castro de

    1990-01-01

    Advisor: Jose Bittencourt de Andrade. Dissertation (master's) - Universidade Federal do Paraná, Setor de Tecnologia. Abstract: This dissertation investigates atmospheric refraction in Doppler measurements. Four models for tropospheric correction are considered, along with the two-frequency model for correcting ionospheric refraction. Different sources of meteorological data are also tested. All tests are performed using the GEO... program

  18. Computational prediction of the tertiary structure of the human proteins Hsp27, αB-crystallin and HspB8

    Directory of Open Access Journals (Sweden)

    Homero Saenz-Suárez

    2011-03-01

    Full Text Available Objective. To perform computational structure predictions for the human proteins Hsp27, αB-crystallin and HspB8. Materials and methods. The secondary-structure prediction was obtained by consensus of the secondary-structure prediction programs GOR 4, nnPred, Sspro, APSSP2, JPredict, Porter, Prof, SOPMA, HNN and Psi-Pred. Tertiary-structure models were built from homologous fragments of proteins of known tertiary structure obtained by multiple alignments. Using the primary sequence, antigenicity profiles of the native proteins were obtained, and the hydrophobicity, polarity, flexibility and accessibility profiles of both the native and the mutated proteins were analyzed. Results. The secondary- and tertiary-structure predictions show that, in all three cases, more than 65% of each protein consists of coil regions, 20-25% of beta sheet, and less than 10% of alpha helix. The primary-structure analyses show that at least one of the profiles studied is altered by each mutation. Conclusions. The comparative structure analyses suggest that the mutations affect the solubility of the mutated proteins and thereby their function as molecular chaperones.
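
    The consensus step described here amounts to combining per-residue predictions. A minimal sketch of majority voting over several predictor outputs (illustrative only; the study's exact combination rule is not given in the record):

        from collections import Counter

        def consensus(predictions):
            """Majority vote per residue over several secondary-structure
            strings using H (helix), E (strand), C (coil)."""
            assert len({len(p) for p in predictions}) == 1
            return "".join(Counter(col).most_common(1)[0][0]
                           for col in zip(*predictions))

        # Three hypothetical predictor outputs for one sequence:
        print(consensus(["CCHHHEECC", "CCHHHEECC", "CCCHHEECC"]))  # CCHHHEECC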

  19. NASA UAS Integration into the NAS Project: Human Systems Integration

    Science.gov (United States)

    Shively, Jay

    2016-01-01

    This presentation provides an overview of the work the Human Systems Integration (HSI) sub-project has done on detect and avoid (DAA) displays while working on the UAS (Unmanned Aircraft System) Integration into the NAS project. The most recent simulation on DAA interoperability with Traffic Collision Avoidance System (TCAS) is discussed in the most detail. The relationship of the work to the larger UAS community and next steps are also detailed.

  20. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
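
    At its simplest, the screening workflow described - compute properties in bulk, then filter - reduces to a query over a table of computed entries. A toy sketch with hypothetical fields and thresholds (stand-ins, not Materials Project data or its API):

        # Hypothetical computed entries; field names are illustrative only.
        entries = [
            {"formula": "CuAlS2",  "band_gap_eV": 1.6},
            {"formula": "AgInTe2", "band_gap_eV": 0.9},
            {"formula": "NaCl",    "band_gap_eV": 5.0},
        ]

        # A crude proxy for thermoelectric screening: keep candidates with
        # a modest band gap; a real workflow applies many such filters.
        candidates = [e for e in entries if 0.3 <= e["band_gap_eV"] <= 1.2]
        print([e["formula"] for e in candidates])   # ['AgInTe2']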

  1. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphical processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time, so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel machine with 64 nodes (128 cores).
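
    The global spiking list can be pictured with a small serial sketch of one simulation step; the names and toy dynamics below are illustrative, and the paper's MPI and CUDA versions distribute exactly this kind of update across computing units:

        import numpy as np

        def step(v, w, threshold=1.0, decay=0.95):
            """One time step of a toy spiking network.
            v: membrane potentials, w: dense weight matrix."""
            spiking = np.flatnonzero(v >= threshold)  # the global spiking list
            v[spiking] = 0.0                          # reset neurons that fired
            # Every unit processes the influence of all listed spikes at once:
            v += w[:, spiking].sum(axis=1)
            return decay * v, spiking

        rng = np.random.default_rng(1)
        v = rng.random(8)
        w = 0.1 * rng.random((8, 8))
        for _ in range(5):
            v, fired = step(v, w)
            print(fired)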

  2. Human values in organizations: relationship with burnout syndrome and work engagement

    OpenAIRE

    Coelho, Gabriel Lins de Holanda

    2014-01-01

    For many decades, research in organizations focused on the negative effects experienced by workers, with burnout syndrome as the main exponent. In recent years, with the expansion of Positive Psychology, interest in positive aspects has grown, resulting in the study of work engagement, considered the antithesis of burnout and essential for maximizing human capital in organizations. However, for the implementation of an environment conducive to stimulating engage...

  3. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu; Duan, Benchun; Taylor, Valerie

    2011-01-01

    , such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular

  4. Excitement over the World Cup curbs protests on social networks in Brazil and worldwide

    OpenAIRE

    Zarko, Raphael

    2014-01-01

    After the 2013 Confederations Cup was marked by demonstrations against the hosting of, and excessive spending on, the World Cup in Brazil - both in the streets and on the internet - the impression that protests declined during the Brazil World Cup is confirmed by extensive research on social networks. Monitoring more than 11 million Twitter messages in Brazil and worldwide, the number of mentions of protests is only 17 thousand - in percentage terms, only 0...

  5. Structural studies of proteins of Leptospira interrogans serovar Copenhageni potentially located in the cell envelope

    OpenAIRE

    Priscila Oliveira de Giuseppe

    2010-01-01

    Abstract: Leptospira interrogans is a spirochete bacterium that causes leptospirosis, a zoonosis of worldwide distribution that affects more than 500,000 people annually. Little is known about the biology of leptospires, which hampers the development of new strategies for prevention and treatment of the disease. About 60% of the genes of L. interrogans encode proteins with no significant sequence similarity to proteins of known function. Since the crystallographic structure ...

  6. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia system (10,240 Intel Itanium processors). The simulation assessed the performance of the cooling system, identified deficiencies, and recommended modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools, to generate a geometric model of the computer room, and the OVERFLOW-2 code, for fluid and thermal simulation. This state-of-the-art technology can easily be extended to provide a general capability for air-flow analyses of any modern computer room.

  7. Journalism on community radio stations

    OpenAIRE

    Rosembach, Cilto José

    2006-01-01

    This study analyzes journalism on community radio stations from the paradigm of popular and alternative communication and the historical context of community radio in Brazil. The news programming of two community radio stations in the State of São Paulo is analyzed using a theoretical framework that elucidates popular communication and prioritizes concepts of popular journalism. The stations analyzed include Rádio Cantareira FM 107.5, in Vila Isabel, Brasilândia district, São...

  8. Vector for the co-expression of several heterologous proteins in equimolar amounts

    OpenAIRE

    Daròs Arnau, José Antonio; Bedoya, Leonor; Martínez, Fernando

    2010-01-01

    [ES] The invention relates to an expression vector based on the nucleotide sequence of the genome of a Potyvirus, preferably tobacco etch virus, harboring a nucleotide sequence encoding at least one heterologous protein, preferably two and more preferably three heterologous proteins. The heterologous proteins are expressed, in the cell transfected with this vector, as part of the viral polyprotein and are flanked by...

  9. High-Power 1180-nm GaInNAs DBR Laser Diodes

    DEFF Research Database (Denmark)

    Aho, Antti T.; Viheriala, Jukka; Korpijarvi, Ville-Markus

    2017-01-01

    We report high-power 1180-nm GaInNAs distributed Bragg reflector laser diodes with and without a tapered amplifying section. The untapered and tapered components reached room-temperature output powers of 655 mW and 4.04 W, respectively. The diodes exhibited narrow-linewidth emission with side... ...and better carrier confinement compared with traditional GaInAs quantum wells. The development opens new opportunities for the power scaling of frequency-doubled lasers with emission at yellow-orange wavelengths.

  10. Studies of quantum levels in GaInNAs single quantum wells

    International Nuclear Information System (INIS)

    Shirakata, Sho; Kondow, Masahiko; Kitatani, Takeshi

    2006-01-01

    Spectroscopic studies have been carried out on the quantum levels in GaInNAs/GaAs single quantum wells (SQWs). Photoluminescence (PL), PL excitation (PLE), photoreflectance (PR), and high-density-excited PL (HDE-PL) were measured on high-quality GaInNAs SQWs, Ga0.65In0.35N0.01As0.99/GaAs (well thickness l_z = 10 nm) and Ga0.65In0.35N0.005As0.995/GaAs (l_z = 3-10 nm), grown by molecular-beam epitaxy. For Ga0.65In0.35N0.01As0.99/GaAs (l_z = 10 nm), PL at 8 K exhibited a peak at 1.07 eV due to the exciton-related transition between the ground-state quantum levels (e1-hh1). Both PR and PLE exhibited three transitions (1.17, 1.20 and 1.32 eV); the former two were assigned to either the e1-lh1 or the e2-hh2 transition, while the transition at 1.32 eV was assigned to the e2-lh2 transition. In HDE-PL, a new PL peak was observed at about 1.2 eV and assigned to the unresolved e1-lh1 and e2-hh2 transitions. Similar optical measurements were performed on Ga0.65In0.35N0.005As0.995/GaAs with various l_z (3-10 nm), and the dependence of the optical spectra and quantum-level energies on l_z was studied. HDE-PL in combination with PLE has been found to be a good tool for studying the quantum levels of GaInNAs SQWs.
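
    For orientation, the well-width dependence probed here follows, to first approximation, the particle-in-a-box scaling (an idealized infinite-well formula; real GaInNAs wells are finite, so the authors' assignments would rest on a more complete model):

        E_n = \frac{n^2 \pi^2 \hbar^2}{2 m^{*} l_z^{2}}

    so thinning the well pushes the confined levels up roughly as 1/l_z^2, consistent with the blue shift expected as l_z decreases from 10 nm toward 3 nm.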

  11. Seminal plasma proteins increase post-thaw sperm viability of semen from Sanmartinero bulls

    OpenAIRE

    Fabián Rueda A.; Tatiana Garcés P.; Rocío Herrera L.; Luis Arbeláez R.; Miguel Peña J.; Henry Velásquez P.; Aureliano Hernández V.; Jaime Cardozo C.

    2013-01-01

    Objective. The objective of this work was to evaluate the effect of the addition of seminal plasma proteins on the percentage of viable bovine spermatozoa after thawing. Materials and methods. Spermatozoa were frozen using two media (citrate-fructose-yolk and Bioxcell®), and low-molecular-weight seminal plasma proteins were obtained by low-pressure liquid chromatography. The proteins of interest eluted in fractions 21-25 and were subjected...

  13. Women's participation in public life and in the activities of the Roman domus: epigraphic testimony from Surrentum, Stabiae and Nuceria

    Directory of Open Access Journals (Sweden)

    Maricí Martins Magalhães

    2014-10-01

    Full Text Available The significant role of married women of equestrian rank in Roman society, and the presence of women of senatorial and imperial families in the municipalities of Nuceria, Stabiae and Surrentum: their honors and political influence, their interaction with local administration, and their own business affairs. The engagement of such women in family activities and administration, and in the domestic activities of their own male and female slaves. Some epigraphic and archaeological testimony.

  14. Analysis of the protein expression profile of the halotolerant organism Tistlia consotensis under different salinity conditions

    Directory of Open Access Journals (Sweden)

    Carolina Rubiano-Labrador

    2016-12-01

    Full Text Available In its natural habitat, Tistlia consotensis, a halotolerant bacterium isolated from a Colombian saline spring, is exposed to continuous variations in osmolarity and therefore needs an effective response to changes in salinity. To determine the protein expression profile of T. consotensis under different salinities (0, 5 and 40 g·l-1 NaCl), two-dimensional electrophoresis (2-DE) was used to analyze changes in protein expression in response to osmotic stress. The results identified 56 protein spots with significant changes in expression. In the absence of NaCl, 22 spots were over-expressed and 5 under-expressed, while at 40 g·l-1 NaCl six spots were under-expressed. Analysis of the protein expression profiles of T. consotensis showed that this halotolerant bacterium responds to changes in the salinity of the medium by expressing a differential protein profile, in which the number and intensity of expressed proteins decrease as salinity increases. In addition, proteins were detected that probably give T. consotensis its capacity to adapt to changing salinity conditions.

  15. 48 CFR 852.236-82 - Payments under fixed-price construction contracts (without NAS).

    Science.gov (United States)

    2010-10-01

    ... manner; or (iv) Failure to comply in good faith with approved subcontracting plans, certifications, or... under other provisions of the contract or in accordance with the general law and regulations regarding... construction contracts (without NAS). 852.236-82 Section 852.236-82 Federal Acquisition Regulations System...

  16. NAS Decadal Review Town Hall

    Science.gov (United States)

    The National Academies of Sciences, Engineering and Medicine is seeking community input for a study on the future of materials research (MR). Frontiers of Materials Research: A Decadal Survey will look at defining the frontiers of materials research ranging from traditional materials science and engineering to condensed matter physics. Please join members of the study committee for a town hall to discuss future directions for materials research in the United States in the context of worldwide efforts. In particular, input on the following topics will be of great value: progress, achievements, and principal changes in the R&D landscape over the past decade; identification of key MR areas that have major scientific gaps or offer promising investment opportunities from 2020-2030; and the challenges that MR may face over the next decade and how those challenges might be addressed. This study was requested by the Department of Energy and the National Science Foundation. The National Academies will issue a report in 2018 that will offer guidance to federal agencies that support materials research, science policymakers, and researchers in materials research and other adjoining fields. Learn more about the study at http://nas.edu/materials.

  17. JK and the reinvention of everyday life in Brazilian journalistic narratives

    Directory of Open Access Journals (Sweden)

    Renato de Almeida Vieira e Silva

    2014-03-01

    Full Text Available What is the importance of speeches for the construction of the presidential image in journalistic narratives in a given historical context, this construction of meaning being capable even of resignifying the everyday life of a country, activating the imaginary, and transcending that period of government to become mythological even for the presidents who followed? This paper analyzes these hypotheses of symbolic production and meaning found in the speeches of President JK published in some of the main Brazilian magazines between 1956 and 1960, represented by O Cruzeiro and Manchete, together with some citations published in the magazines Época, Veja and Isto É in more recent periods. To this end, concepts from authors such as Bourdieu, Barthes, Orlandi, Heller, Motta, Eliade and Girardet are used.

  18. How has the work of social workers in private companies been carried out?

    Directory of Open Access Journals (Sweden)

    Stephania Lani de Lacerda Reis Gavioli de Abreu

    2016-05-01

    Full Text Available This article aims to present how social workers have been working in private companies. The study was conducted through qualitative research, by means of semi-structured interviews with professionals in the field at six private companies. The main results show that the work of social workers in private companies is marked by several antagonisms; nevertheless, it is believed to be possible to direct their work toward the interests of workers in parallel with capital's interest in profitability, through strategies articulated with the ethical-political project of Social Work.

  19. Serum ferritin is an independent predictor of histologic severity and advanced fibrosis in patients with nonalcoholic fatty liver disease.

    Science.gov (United States)

    Kowdley, Kris V; Belt, Patricia; Wilson, Laura A; Yeh, Matthew M; Neuschwander-Tetri, Brent A; Chalasani, Naga; Sanyal, Arun J; Nelson, James E

    2012-01-01

    Serum ferritin (SF) levels are commonly elevated in patients with nonalcoholic fatty liver disease (NAFLD) because of systemic inflammation, increased iron stores, or both. The aim of this study was to examine the relationship between elevated SF and NAFLD severity. Demographic, clinical, histologic, laboratory, and anthropometric data were analyzed in 628 adult patients (age ≥ 18 years) with biopsy-proven NAFLD and an SF measurement within 6 months of their liver biopsy. A threshold SF > 1.5 × upper limit of normal (ULN) (i.e., > 300 ng/mL in women and > 450 ng/mL in men) was significantly associated with male sex, elevated serum alanine aminotransferase, aspartate aminotransferase, iron, transferrin-iron saturation, iron stain grade, and decreased platelets (P ...). Histologic features were more severe among patients with SF > 1.5 × ULN, including steatosis, fibrosis, hepatocellular ballooning, and diagnosis of NASH (P ...). SF > 1.5 × ULN was independently associated with advanced hepatic fibrosis (odds ratio [OR], 1.66; 95% confidence interval [CI], 1.05-2.62; P = 0.028) and increased NAFLD Activity Score (NAS) (OR, 1.99; 95% CI, 1.06-3.75; P = 0.033). An SF > 1.5 × ULN is associated with hepatic iron deposition, a diagnosis of NASH, and worsened histologic activity, and is an independent predictor of advanced hepatic fibrosis among patients with NAFLD. Furthermore, elevated SF is independently associated with higher NAS, even among patients without hepatic iron deposition. We conclude that SF is useful to identify NAFLD patients at risk for NASH and advanced fibrosis. Copyright © 2011 American Association for the Study of Liver Diseases.

  20. Interaction of Bacillus thuringiensis Cry1 and Vip3A proteins for the control of lepidopteran pests

    Directory of Open Access Journals (Sweden)

    Paula Cristina Brunini Crialesi-Legori

    2014-02-01

    Full Text Available The objective of this work was to evaluate the susceptibility of the caterpillars Anticarsia gemmatalis (Lepidoptera: Erebidae) and Chrysodeixis includens (Lepidoptera: Noctuidae) to Cry1 and Vip3A proteins, and to determine whether these proteins interact in the control of the two species. Bioassays with the proteins alone and in combination were performed, and the lethal concentrations LC50 and LC90 were estimated for each condition. The proteins Cry1Aa, Cry1Ac and Vip3Af were the most effective in controlling A. gemmatalis, while Cry1Ac, Vip3Aa and Vip3Af were the most effective against C. includens. Cry1Ac and Cry1Ca caused the greatest inhibition of development in the larvae surviving the LC50, in both species. Combinations of Vip3A and Cry1 show a synergistic effect in the control of these species, and the combination Vip3Aa+Cry1Ea stands out in the control of A. gemmatalis and C. includens. These combined proteins are promising for the construction of pyramided plants for simultaneous pest control.

  1. Extraction and quantification of chlorophylls a and b in the leaves of Xanthosoma sagittifolium

    OpenAIRE

    Gabriela Coelho Couceiro; Yara Barbosa Bustamante; Janicy Arantes Carvalho; Diego Pachelli Teixeira; Patrícia Marcondes dos Santos; Milton Beltrame Junior; Andreza Ribeiro Simioni

    2017-01-01

    The plant Xanthosoma sagittifolium (taioba) is a leafy vegetable that can meet many nutritional needs, being a source of protein, calcium, iron, vitamin C and other nutrients. Chlorophylls are the most abundant pigments in plants and have several health benefits. Accordingly, the presence of chlorophylls in the species Xanthosoma sagittifolium was analyzed because of its role in food and its health benefits. The concentrations of chlorophylls a and b were determined by spectrophotometry...
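
    For context, one calibration commonly used for such spectrophotometric determinations in 80% acetone extracts is Arnon's (1949); the record does not state which equations were actually applied, so these are quoted only as the textbook form:

        Chl a (mg/L) = 12.7 A_{663} - 2.69 A_{645}
        Chl b (mg/L) = 22.9 A_{645} - 4.68 A_{663}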

  2. Evidence for the efficacy of protein supplementation in sports performance

    OpenAIRE

    Carrascal Quemada, César

    2014-01-01

    Protein supplementation is a key element in sport. It has been shown that, in both strength and endurance sports, it is an effective ergogenic aid that helps improve strength and speed and shortens recovery times. It has also been shown that a timing schedule must be respected when taking supplementation, depending on the sport and its objectives. Proteins have synergistic effects with other products, most notably ...

  3. Vector for the co-expression of several heterologous proteins in equimolar amounts

    OpenAIRE

    Daròs Arnau, José Antonio; Bedoya, Leonor; Martínez, Fernando

    2010-01-01

    The invention relates to an expression vector based on the nucleotide sequence of the genome of a Potyvirus, preferably tobacco etch virus, harboring a nucleotide sequence encoding at least one heterologous protein, preferably two and more preferably three heterologous proteins. The heterologous proteins are expressed, in the cell transfected with this vector, as part of the viral polyprotein and are flank...

  4. Emotional leadership in organizations

    OpenAIRE

    Antonholi, Aparecida Iembo

    2013-01-01

    ABSTRACT For many years, leadership was studied as personality traits and as the capacity to influence people. This article addresses traditional concepts and emphasizes new strategies for the practice of leadership in organizations, using emotional intelligence as a tool to increase the capacity to manage emotions and feelings. Social competence, self-awareness, self-management and relationship management are competencies of emotional intelli...

  5. Intradermal testing with recombinant Mycobacterium bovis proteins as antigens in Cavia porcellus

    Directory of Open Access Journals (Sweden)

    Elaine S.P. Melo

    2014-10-01

    Full Text Available The intradermal test for the diagnosis of bovine tuberculosis uses purified protein derivatives (PPD) of Mycobacterium bovis, which can induce hypersensitivity reactions in infected animals. However, it has low specificity owing to cross-reactions with other mycobacteria. The objective of this work was therefore to produce recombinant Mycobacterium bovis proteins (ESAT-6, PE13, PE5 and ESX-1), to evaluate them as antigens in an intradermal test using Cavia porcellus as a model, and to verify whether the conditions used in purification (native or denaturing) affect the antigenic performance of these proteins. The proteins were tested in Cavia porcellus previously sensitized with inactivated M. bovis strain AN5, individually (160 µg) or combined in a cocktail (40 µg each). The protein cocktail induced hypersensitivity reactions in sensitized animals significantly greater (p=0.002) than those observed in non-sensitized animals, enabling differentiation. The proteins individually, however, were not able to provide this differentiation. The solubilization and purification conditions influenced the antigenic performance of the ESAT-6 protein: when produced under denaturing conditions it triggered nonspecific reactions in non-sensitized animals, whereas the protein produced under native conditions and applied at concentrations of 6, 12, 24 and 48 µg induced significant reactions only in sensitized animals, confirming its potential as an antigen.

  6. Proactive behavior in organizations: the effect of personal values

    Directory of Open Access Journals (Sweden)

    Meiry Kamia

    Full Text Available Proactive behavior is defined as a set of extra-role behaviors through which the worker spontaneously seeks changes in the work environment, solving and anticipating problems with a view to long-term goals that benefit the organization. This study investigated the relationship between personal values and proactive behavior in organizations. The Personal Values Questionnaire and the Proactive Behavior in Organizations Scale, both already validated for Brazil, were used as measurement instruments. After removal of outliers, the sample consisted of 325 employees of different organizations. Linear regression analysis revealed that values significantly predict proactive behaviors, indicating a positive relationship of the motivational types stimulation (B = 0.205, p < 0.01) and universalism/benevolence (B = 0.302, p < 0.01) with proactivity, and a negative relationship with the motivational type tradition (B = -0.189, p < 0.01), as predicted by the theoretical framework. Implications for studies in the area are discussed.

  7. Toxicity and binding capacity of Cry1 proteins to intestinal receptors of Helicoverpa armigera (Lepidoptera: Noctuidae)

    Directory of Open Access Journals (Sweden)

    Isis Sebastião

    2015-11-01

    Full Text Available Abstract: The objective of this work was to evaluate the toxicity and binding capacity of the Bacillus thuringiensis proteins Cry1Aa, Cry1Ab, Cry1Ac and Cry1Ca to intestinal receptors of Helicoverpa armigera. Binding analysis of the activated proteins to brush border membrane vesicles (BBMV) of the H. armigera midgut was performed, along with heterologous competition assays to evaluate binding capacity. Cry1Ac stood out as the most toxic protein, followed by Cry1Ab and Cry1Aa. The Cry1Ca protein was not toxic to the caterpillars, so its toxicity parameters LC50 and LC90 could not be determined. The proteins Cry1Aa, Cry1Ab and Cry1Ac are able to bind to the same receptor on the intestinal membranes, which increases the risk of the development of cross-resistance. Therefore, the joint use of these proteins should be avoided.

  8. Crystallographic assays of DNA complexes with HMG-box proteins and drugs

    OpenAIRE

    Gomez Jimenez, Fabiola Alejandra

    2016-01-01

    HMGB are nuclear proteins bearing the "HMG-box" motif, with which they bind to the minor groove of DNA. They produce structural changes in it and are implicated in different diseases, so the structural study of these proteins bound to DNA is important for the development of therapeutic strategies. In addition, compounds derived from diphenyl bisimidazolinium also bind to the minor groove of DNA, specifically in AT-rich regions...

  9. Real World Uses For Nagios APIs

    Science.gov (United States)

    Singh, Janice

    2014-01-01

    This presentation describes the Nagios 4 APIs and how the NASA Advanced Supercomputing Division at Ames Research Center is employing them to upgrade its graphical status display (the HUD), and explains why it is worth trying to use them yourself.
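
    As a taste of what such integrations look like, here is a minimal query against the JSON CGI API that ships with Nagios Core 4. The URL, credentials and exposed CGIs depend on the local install; this sketch is an assumption for illustration, not the HUD's actual code:

        import requests

        # statusjson.cgi is one of the JSON CGIs bundled with Nagios Core 4.
        BASE = "http://nagios.example.org/nagios/cgi-bin/statusjson.cgi"

        resp = requests.get(BASE, params={"query": "hostlist"},
                            auth=("nagiosadmin", "secret"), timeout=10)
        resp.raise_for_status()
        for name, state in resp.json()["data"]["hostlist"].items():
            print(name, state)   # state is a numeric host-status code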

  10. IGF-I, leptin, insulin and proteins associated with seminal plasma quality: local action

    Directory of Open Access Journals (Sweden)

    Fernando Andrade Souza

    2010-12-01

    Full Text Available It is possible that the expression of some elements of bovine seminal plasma, such as proteins and hormones, can serve as markers for semen of high or low fertility. Several studies have demonstrated the association of seminal plasma proteins with bull fertility. Among the most studied are those with heparin affinity, which play important roles in sperm capacitation and the acrosome reaction. Some endocrine and/or local factors may be associated with the expression and/or function of these proteins, contributing to sperm conditions favorable to fertilization. Notable among these are insulin, leptin, and insulin-like growth factor I. They reveal differences between animals, being associated with the structure and metabolic condition of the sperm cell, and help determine seminal plasma quality. Thus, the study of seminal plasma proteins, together with the metabolic condition of these hormones present in this medium, can serve as an important parameter for evaluating the reproductive condition of the male.

  11. 78 FR 12951 - TRICARE; Elimination of the Non-Availability Statement (NAS) Requirement for Non-Emergency...

    Science.gov (United States)

    2013-02-26

    ... an annual effect of $100 million or more on the national economy or which would have other... maternity services, the ASD(HA) may require an NAS prior to TRICARE cost-sharing for additional services...

  12. Process for obtaining wine yeasts that overproduce mannoproteins using non-recombinant technologies

    OpenAIRE

    Barcenilla Moraleda, José María; González Ramos, Daniel; Tabera, Laura; González García, Ramón

    2008-01-01

    Process for obtaining wine yeast strains that overproduce mannoproteins by means of non-recombinant technologies, through the selection of mutants resistant to the K9 killer toxin; strains obtainable by said process; and uses.

  13. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-01-01

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results
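
    As one concrete instance of the variance-reduction idea, the sketch below shows implicit capture (survival weighting) with Russian roulette in a deliberately simplified forward-streaming 1-D slab model - a textbook technique for illustration, not Shift's CADIS machinery:

        import numpy as np

        def transmission(sigma_t, sigma_a, slab, n, rng):
            """Estimate transmission through a 1-D slab. Particles survive
            every collision with reduced weight instead of being absorbed
            outright, which lowers the variance of the tally."""
            score = 0.0
            for _ in range(n):
                x, w = 0.0, 1.0
                while True:
                    x += -np.log(rng.random()) / sigma_t  # flight distance
                    if x >= slab:
                        score += w                 # escaped: tally the weight
                        break
                    w *= 1.0 - sigma_a / sigma_t   # survive with less weight
                    if w < 1e-3:                   # Russian roulette cutoff
                        if rng.random() < 0.5:
                            break
                        w *= 2.0
            return score / n

        rng = np.random.default_rng(42)
        print(transmission(sigma_t=1.0, sigma_a=0.5, slab=3.0, n=20000, rng=rng))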

  14. Kids at CERN Grids for Kids programme leads to advanced computing knowledge.

    CERN Multimedia

    2008-01-01

    Children as young as 10 learned computing skills such as middleware, parallel processing and supercomputing at CERN, the European Organisation for Nuclear Research, last week. The initiative for 10- to 12-year-olds is part of the Grids for Kids programme, which aims to introduce Grid computing as a tool for research.

  15. From Tiselius's doctoral thesis to proteomics: seventy-five years of protein electrophoresis

    Directory of Open Access Journals (Sweden)

    Fernández Santarén, Juan

    2004-02-01

    Full Text Available The term "proteome" was first used in 1995 to describe the proteins encoded by a genome. Inadvertently, the proteome has gradually turned into "proteomics", a set of techniques that has been elevated to the rank of a scientific discipline. What is proteomics? In essence, it is the large-scale study of the composition and properties of the proteins that make up a biological system. Its practical execution rests on two main pillars: first, the separation of complex protein mixtures, a process usually carried out by two-dimensional electrophoresis, and second, the identification of the separated proteins, for which mass spectrometry is currently used.

  16. Characterization of the total proteins of three ecotypes of maca (Lepidium peruvianum G. Chacón) by one-dimensional and two-dimensional electrophoresis

    Directory of Open Access Journals (Sweden)

    Mario Monteghirfo

    2007-12-01

    Full Text Available Objective: To characterize the soluble proteins found in the root of Lepidium peruvianum G. Chacón (maca) by one-dimensional and two-dimensional electrophoresis. Design: Observational, cross-sectional study. Setting: Centro de Investigación de Bioquímica y Nutrición Alberto Guzmán Barrón, Facultad de Medicina, Universidad Nacional Mayor de San Marcos, Lima, Peru. Materials: Roots of Lepidium peruvianum G. Chacón 'maca' of the white, yellow and purple ecotypes, from Junín, obtained through the Universidad Nacional del Centro del Perú. Methods: Total soluble proteins were extracted with an antioxidant solution, followed by one-dimensional and two-dimensional electrophoresis for their characterization. Main outcome measures: Number of soluble proteins, molecular weights of the proteins, and isoelectric points of the most abundant proteins. Results: One-dimensional electrophoretic analysis showed a predominance of two proteins (72% of total soluble protein): one of 22.5 kDa, named 'macatin' in this work (51% of total protein), and another of 17.0 kDa (21% of total soluble protein). The two-dimensional electrophoretic map showed that both macatin and the 17.0 kDa protein are basic and present three charge isomers distributed over an isoelectric point (pI) range of 7.1 to 8.2. Conclusions: The soluble proteins showed a complex electrophoretic pattern, macatin being the most abundant protein.

  17. Remotely Operated Aircraft (ROA) Impact on the National Airspace System (NAS) Work Package: Automation Impacts of ROA's in the NAS

    Science.gov (United States)

    2005-01-01

    The purpose of this document is to analyze the impact of Remotely Operated Aircraft (ROA) operations on current and planned Air Traffic Control (ATC) automation systems in the En Route, Terminal, and Traffic Flow Management domains. The operational aspects of ROA flight, while similar, are not entirely identical to those of their manned counterparts and may not have been considered within the time horizons of the automation tools. This analysis was performed to determine whether the flight characteristics of ROAs would be compatible with current and future NAS automation tools. Improvements to existing systems and processes are recommended that would give Air Traffic Controllers an indication that a particular aircraft is an ROA, along with modifications to IFR flight-plan processing algorithms and/or designation of airspace where an ROA will be operating for long periods of time.

  18. Power structures in political financing networks in the 2010 Brazilian elections

    Directory of Open Access Journals (Sweden)

    Rodrigo Rossi Horochovski

    2016-04-01

    Full Text Available Abstract This article analyzes the 299,968 relationships established among the 251,665 donors and/or recipients of legal campaign funds covered by the campaign-finance disclosures for the 2010 elections in Brazil, encompassing all candidates and parties. Social network analysis and complementary statistical treatments are applied to the data of the Tribunal Superior Eleitoral (TSE) to explore the topology of the sub-networks (components) and to compute the centrality of the actors - candidates, party agents and private financiers. The results expose the high connectivity and asymmetry of the electoral financing network in Brazil and show that the actors' position in strata of the network is decisive for the performance of both candidates and financiers, revealing, in an unprecedented way, an elite within Brazilian political-electoral power.
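
    The centrality computations described map directly onto standard network tooling. A minimal sketch with networkx on a toy donor-candidate graph (illustrative data, not the TSE records):

        import networkx as nx

        # Toy donation network: donors -> candidates, weight = amount.
        G = nx.DiGraph()
        G.add_edge("donor_A", "candidate_X", weight=500_000)
        G.add_edge("donor_A", "candidate_Y", weight=200_000)
        G.add_edge("donor_B", "candidate_X", weight=50_000)

        print(nx.degree_centrality(G))
        print(nx.in_degree_centrality(G))  # candidates attracting many donors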

  19. The golden rule and ethics in organizations

    Directory of Open Access Journals (Sweden)

    Hermano Roberto Thiry-Cherques

    Full Text Available This article examines the principle of the golden rule and questions its wide application in organizations. The text summarizes the rule's trajectory in the history of philosophical thought and, drawing on Kant's critique, presents arguments that expose its logical fragility.

  20. Cow's Milk Protein Allergy: A New Era

    Directory of Open Access Journals (Sweden)

    Filipe Benito Garcia

    2016-01-01

    ...currently allows the patient a free diet. This therapeutic strategy is proving revolutionary in that it makes it possible to modify the natural history of severe and persistent cow's milk protein allergy, with a very positive impact on the quality of life of patients and their families.

  1. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first: it is an analytical model for studying the dispersion of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as a simulator in an emergency management system is considered.
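
    The Gaussian solution referred to has the familiar puff form (the standard dispersion expression; the report's exact parameterization of the dispersion coefficients is not reproduced here):

        C(x, y, z, t) = \frac{Q}{(2\pi)^{3/2} \sigma_x \sigma_y \sigma_z}
                        \exp\!\left( -\frac{(x - ut)^2}{2\sigma_x^2}
                                     -\frac{y^2}{2\sigma_y^2}
                                     -\frac{z^2}{2\sigma_z^2} \right)

    for an instantaneous release of mass Q advected by a mean wind u along x.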

  2. Efficiency of the use of expert systems in the health field

    Directory of Open Access Journals (Sweden)

    Gabriel Oliveira Tomedi

    2017-02-01

    Full Text Available Expert systems are one of the techniques of Artificial Intelligence aimed at assisting professionals in a given domain. In other words, they are defined as computer programs that seek to solve problems in a given field of knowledge in the same way a specialist would. The objective of this study is to carry out a bibliographic survey on the effectiveness of the use of these systems in the health field. To this end, studies from the last twelve years were collected from the SciELO and Google Scholar databases. The present review found reports of greater diagnostic precision, shorter service times, improved professional performance, and easy access to information. Based on these reports, it can be concluded that the use of expert systems is effective, as it has improved several aspects in these areas.

  3. Resonance – Journal of Science Education | News

    Indian Academy of Sciences (India)

    Programming Languages - A Brief Review. V Rajaraman, IBM Professor of Information Technology, Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore 560012, India; Hon. Professor, Supercomputer Education & Research Centre, Indian Institute of Science, Bangalore 560012, India

  4. Euphorbiaceae Juss.: species occurring in the restingas of the State of Rio de Janeiro, Brazil

    Directory of Open Access Journals (Sweden)

    Arline Souza de Oliveira

    1989-01-01

    Full Text Available This work lists the species of the family Euphorbiaceae Juss. found in the restingas (sandy coastal plains) of the State of Rio de Janeiro, Brazil. Collections were made between 1983 and 1988 in various stretches of the Rio de Janeiro coastline, in the different vegetation belts. In addition to the checklist of 31 species in 16 genera, the life form (habit) of these taxa is also recorded, for a better understanding of this family's place in the floristic composition of the restingas.

  5. Sustainable development as a way to mitigate the negative impact of globalization on local communities

    Directory of Open Access Journals (Sweden)

    Bonder, Cintia

    2003-01-01

    Full Text Available This article aims to show some impacts of globalization on local communities and how sustainable development can address these impacts. To this end, it presents some concepts of globalization and its schools of thought, discusses sustainable development and globalization, and finally discusses the impact of globalization on local communities and how to minimize it.

  6. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown by the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  7. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown by the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  8. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  9. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  10. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  11. Study of plutonium disposition using the GE Advanced Boiling Water Reactor (ABWR)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-30

    The end of the cold war and the resulting dismantlement of nuclear weapons has resulted in the need for the U.S. to disposition 50 to 100 metric tons of excess plutonium, in parallel with a similar program in Russia. A number of studies, including the recently released National Academy of Sciences (NAS) study, have recommended conversion of plutonium into spent nuclear fuel, with its high radiation barrier, as the best means of providing long-term diversion resistance to this material. The NAS study "Management and Disposition of Excess Weapons Plutonium" identified light water reactor spent fuel as the most readily achievable and proven form for the disposition of excess weapons plutonium. The study also stressed the need for a U.S. disposition program which would enhance the prospects for a timely reciprocal program agreement with Russia. This summary provides the key findings of a GE study in which plutonium is converted into Mixed Oxide (MOX) fuel and a 1350 MWe GE Advanced Boiling Water Reactor (ABWR) is utilized to convert the plutonium to spent fuel. The ABWR represents the integration of over 30 years of experience gained worldwide in the design, construction and operation of BWRs. It incorporates advanced features to enhance reliability and safety, minimize waste and reduce worker exposure. For example, the core is never uncovered, nor is any operator action required, for 72 hours after any design basis accident. Phase 1 of this study was documented in a GE report dated May 13, 1993. DOE's Phase 1 evaluations cited the ABWR as a proven technical approach for the disposition of plutonium. This Phase 2 study addresses specific areas which the DOE authorized as appropriate for more in-depth evaluations. A separate report addresses the findings relative to the use of existing BWRs to achieve the same goal.

  12. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL – to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; and we measured results and documented our findings. We conclude that UPC is a challenge for porting code, that Berkeley UPC is not very robust, and that UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  13. Nas dobras do legal e do ilegal: Ilegalismos e jogos de poder nas tramas da cidade

    Directory of Open Access Journals (Sweden)

    Vera da Silva Telles

    2009-07-01

    Full Text Available This article discusses the redefined relationships between the informal, the illegal and the illicit that accompany contemporary forms of production and circulation of wealth. It examines how these redefinitions affect social orders and power games in three situations found in the city of São Paulo: the diffuse illegalisms inscribed in the "lateral mobilities" of the urban worker; the illegalisms running through the circuits of informal commerce in the nerve centre of the city's urban economy; and the poor São Paulo outskirts, where all these threads intertwine around the retail drug trade.

  14. Latvijas kā medicīnas tūrisma galamērķa konkurētspēju ietekmējošie faktori

    OpenAIRE

    Sidorenko, Anna

    2012-01-01

    The topic of this bachelor's thesis is "Factors influencing the competitiveness of Latvia as a medical tourism destination". In the light of international experience, providing medical services to foreigners can supply an additional source of income for the national economy. The aim of the thesis is to identify and study the factors that influence the competitiveness of Latvia as a medical tourism destination, and to develop proposals for medical service providers and public administration institutions with a view to promoting Latvia...

  15. Advances in Supercomputing for the Modeling of Atomic Processes in Plasmas

    International Nuclear Information System (INIS)

    Ludlow, J. A.; Ballance, C. P.; Loch, S. D.; Lee, T. G.; Pindzola, M. S.; Griffin, D. C.; McLaughlin, B. M.; Colgan, J.

    2009-01-01

    An overview will be given of recent atomic and molecular collision methods developed to take advantage of modern massively parallel computers. The focus will be on direct solutions of the time-dependent Schroedinger equation for simple systems using large numerical lattices, as found in the time-dependent close-coupling method, and for configuration interaction solutions of the time-independent Schroedinger equation for more complex systems using large numbers of basis functions, as found in the R-matrix with pseudo-states method. Results from these large scale calculations are extremely useful in benchmarking less accurate theoretical methods and experimental data. To take full advantage of future petascale and exascale computing resources, it appears that even finer grain parallelism will be needed.
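
    As a toy illustration of the first class of methods mentioned above, the sketch below propagates a one-dimensional time-dependent Schroedinger equation on a small numerical lattice with the Crank-Nicolson scheme. It is a minimal single-core analogue with an illustrative grid and harmonic potential chosen for the example; the production time-dependent close-coupling codes distribute far larger lattices over many processors.

        import numpy as np

        # 1D Crank-Nicolson propagation of the Schroedinger equation
        # (hbar = m = 1); grid size and potential are illustrative only.
        N, dx, dt = 200, 0.1, 0.005
        x = (np.arange(N) - N // 2) * dx
        V = 0.5 * x**2                                   # harmonic potential

        # Tridiagonal Hamiltonian from the 3-point finite-difference Laplacian
        H = np.diag(1.0 / dx**2 + V)
        H += np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
        H += np.diag(-0.5 / dx**2 * np.ones(N - 1), -1)

        I = np.eye(N)
        A = I + 0.5j * dt * H                            # implicit half step
        B = I - 0.5j * dt * H                            # explicit half step
        U = np.linalg.solve(A, B)                        # unitary Cayley propagator

        psi = np.exp(-(x - 1.0) ** 2).astype(complex)    # displaced Gaussian packet
        psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

        for _ in range(100):                             # norm-preserving time steps
            psi = U @ psi

        print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)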

  16. Konkurētspējas analīze medicīnas tūrisma attīstībai Latvijā.

    OpenAIRE

    Kadincovs, Artūrs

    2013-01-01

    The topic of this master's thesis is "Competitiveness analysis for the development of medical tourism in Latvia". Medical tourism today is an export service with high added value which, under a positive scenario, brings large revenues to the business environment and the national economy and creates demand for well-paid specialists. In the author's opinion, the development of medical tourism in Latvia has a promising outlook, since Latvia has everything necessary, both a qualified workforce and resources, and moreover sufficient ...

  17. Study of plutonium disposition using existing GE advanced Boiling Water Reactors

    Energy Technology Data Exchange (ETDEWEB)

    1994-06-01

    The end of the cold war and the resulting dismantlement of nuclear weapons has resulted in the need for the US to dispose of 50 to 100 metric tons of excess plutonium in a safe and proliferation-resistant manner. A number of studies, including the recently released National Academy of Sciences (NAS) study, have recommended conversion of plutonium into spent nuclear fuel, with its high radiation barrier, as the best means of providing permanent conversion and long-term diversion resistance for this material. The NAS study "Management and Disposition of Excess Weapons Plutonium" identified Light Water Reactor spent fuel as the most readily achievable and proven form for the disposition of excess weapons plutonium. The study also stressed the need for a US disposition program which would enhance the prospects for a timely reciprocal program agreement with Russia. This summary provides the key findings of a GE study in which plutonium is converted into Mixed Oxide (MOX) fuel and a typical 1155 MWe GE Boiling Water Reactor (BWR) is utilized to convert the plutonium to spent fuel. A companion study of the Advanced BWR has recently been submitted. The MOX core design work that was conducted for the ABWR enabled GE to apply comparable fuel design concepts and consequently achieve full MOX core loadings that optimize plutonium throughput for existing BWRs.

  18. Study of plutonium disposition using existing GE advanced Boiling Water Reactors

    International Nuclear Information System (INIS)

    1994-01-01

    The end of the cold war and the resulting dismantlement of nuclear weapons has resulted in the need for the US to dispose of 50 to 100 metric tons of excess plutonium in a safe and proliferation-resistant manner. A number of studies, including the recently released National Academy of Sciences (NAS) study, have recommended conversion of plutonium into spent nuclear fuel, with its high radiation barrier, as the best means of providing permanent conversion and long-term diversion resistance for this material. The NAS study "Management and Disposition of Excess Weapons Plutonium" identified Light Water Reactor spent fuel as the most readily achievable and proven form for the disposition of excess weapons plutonium. The study also stressed the need for a US disposition program which would enhance the prospects for a timely reciprocal program agreement with Russia. This summary provides the key findings of a GE study in which plutonium is converted into Mixed Oxide (MOX) fuel and a typical 1155 MWe GE Boiling Water Reactor (BWR) is utilized to convert the plutonium to spent fuel. A companion study of the Advanced BWR has recently been submitted. The MOX core design work that was conducted for the ABWR enabled GE to apply comparable fuel design concepts and consequently achieve full MOX core loadings that optimize plutonium throughput for existing BWRs.

  19. Propriedades fisiológicas-funcionais das proteínas do soro de leite Physiological-functional properties of milk whey proteins

    Directory of Open Access Journals (Sweden)

    Valdemiro Carlos Sgarbieri

    2004-12-01

    Full Text Available This article highlights the multifunctional properties of the proteins present in bovine milk whey, beginning with colostrum, which contains these proteins at very high concentrations and whose function is to guarantee the protection and passive immunization of the newborn. The same proteins remain in milk, although at much lower concentrations. The use of these proteins in the form of protein concentrates and isolates shows properties highly favorable to health, reducing the risk of infectious diseases as well as of those considered chronic and/or degenerative. Emphasis is placed on the properties of whey proteins and of the peptides derived from them in stimulating the immune system, in protecting against pathogenic microorganisms and against some viruses such as HIV and the hepatitis C virus, in protecting against several types of cancer, particularly colon cancer, and in protecting the gastric mucosa against aggression by ulcerogenic agents; several lines of protective action of whey proteins against agents implicated in cardiovascular problems are also presented. Based on the various functional properties of whey proteins, the advantages and benefits of their use as a food supplement for athletes and sportspeople in general are discussed, as are the possible benefits of the various cell growth factors present in milk whey.

  20. Time-resolved photoluminescence of Ga(NAsP) multiple quantum wells grown on Si substrate: Effects of rapid thermal annealing

    Energy Technology Data Exchange (ETDEWEB)

    Woscholski, R., E-mail: ronja.woscholski@physik.uni-marburg.de; Shakfa, M.K.; Gies, S.; Wiemer, M.; Rahimi-Iman, A.; Zimprich, M.; Reinhard, S.; Jandieri, K.; Baranovskii, S.D.; Heimbrodt, W.; Volz, K.; Stolz, W.; Koch, M.

    2016-08-31

    Time-resolved photoluminescence (TR-PL) spectroscopy has been used to study the impact of rapid thermal annealing (RTA) on the optical properties and carrier dynamics in Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates. TR-PL measurements reveal an enhancement in the PL efficiency when the RTA temperature is increased up to 925 °C. Then, the PL intensity dramatically decreases with the annealing temperature. This behavior is explained by the variation of the disorder degree in the studied structures. The analysis of the low-temperature emission-energy-dependent PL decay time enables us to characterize the disorder in the Ga(NAsP) MQWHs. The theoretically extracted energy-scales of disorder confirm the experimental observations. - Highlights: • Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates • Impact of rapid thermal annealing on the optical properties and carrier dynamics • Time resolved photoluminescence spectroscopy was applied. • PL transients became continuously faster with increasing annealing temperature. • Enhancement in the PL efficiency with increasing annealing temperature up to 925 °C.

  1. Time-resolved photoluminescence of Ga(NAsP) multiple quantum wells grown on Si substrate: Effects of rapid thermal annealing

    International Nuclear Information System (INIS)

    Woscholski, R.; Shakfa, M.K.; Gies, S.; Wiemer, M.; Rahimi-Iman, A.; Zimprich, M.; Reinhard, S.; Jandieri, K.; Baranovskii, S.D.; Heimbrodt, W.; Volz, K.; Stolz, W.; Koch, M.

    2016-01-01

    Time-resolved photoluminescence (TR-PL) spectroscopy has been used to study the impact of rapid thermal annealing (RTA) on the optical properties and carrier dynamics in Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates. TR-PL measurements reveal an enhancement in the PL efficiency when the RTA temperature is increased up to 925 °C. Then, the PL intensity dramatically decreases with the annealing temperature. This behavior is explained by the variation of the disorder degree in the studied structures. The analysis of the low-temperature emission-energy-dependent PL decay time enables us to characterize the disorder in the Ga(NAsP) MQWHs. The theoretically extracted energy-scales of disorder confirm the experimental observations. - Highlights: • Ga(NAsP) multiple quantum well heterostructures (MQWHs) grown on silicon substrates • Impact of rapid thermal annealing on the optical properties and carrier dynamics • Time resolved photoluminescence spectroscopy was applied. • PL transients became continuously faster with increasing annealing temperature. • Enhancement in the PL efficiency with increasing annealing temperature up to 925 °C

  2. Identification of nitrogen- and host-related deep-level traps in n-type GaNAs and their evolution upon annealing

    International Nuclear Information System (INIS)

    Gelczuk, Ł.; Kudrawiec, R.; Henini, M.

    2014-01-01

    Deep level traps in as-grown and annealed n-GaNAs layers (doped with Si) of various nitrogen concentrations (N = 0.2%, 0.4%, 0.8%, and 1.2%) were investigated by deep level transient spectroscopy. In addition, optical properties of GaNAs layers were studied by photoluminescence and contactless electroreflectance. The identification of N- and host-related traps was performed on the basis of the band gap diagram [Kudrawiec, Appl. Phys. Lett. 101, 082109 (2012)], which assumes that the activation energy of electron traps of the same microscopic nature decreases with rising nitrogen concentration, in accordance with the N-related shift of the conduction band towards the trap levels. The application of this diagram has made it possible to investigate the evolution of donor traps in GaNAs upon annealing. In general, it was observed that the concentration of N- and host-related traps decreases after annealing and the PL improves very significantly. However, it was also observed that some traps are generated by annealing. This explains why the annealing conditions have to be carefully optimized for this material system.

  3. Os sentidos de compreensão nas teorias de Weber e Habermas

    Directory of Open Access Journals (Sweden)

    José Geraldo A. B. Poker

    2013-01-01

    Full Text Available Starting from the assumption that the social theory developed by Habermas closely resembles that constructed by M. Weber, a comparative study was carried out to identify the ways in which Weber and Habermas elaborated the concept of understanding, while each, in his own way, chose it as the methodological instrument suited to the difficulties of producing scientific knowledge in the Social Sciences. For Weber as for Habermas, knowledge in the Social Sciences can neither escape the direct influence of the scientist's subjectivity nor shield itself from the historical-cultural contingencies to which all human action is inevitably bound. For this reason, each grounded in his own arguments, both Weber and Habermas point to understanding as the possible form of knowledge, which implies renouncing explanatory pretensions and the production of general theories of ultimate foundation that are typical of the conventional sciences.

  4. Enteropatía perdedora de proteínas en pacientes con corazón univentricular

    Directory of Open Access Journals (Sweden)

    Alfredo Naranjo Ugalde

    Full Text Available Introduction: protein-losing enteropathy may appear in the clinical course of patients with univentricular heart who survive total cavopulmonary connection. Once it is diagnosed, mortality is high. Objective: to identify possible risk factors for this complication. Methods: a prospective cohort study was conducted of the outcomes of 74 patients with total cavopulmonary connection operated on at the "William Soler" Pediatric Cardiocenter from January 1992 to January 2011. Results: the mean follow-up time was 8 years. Protein-losing enteropathy occurred in 8.1 % of the patients. It was more frequent in patients operated on with the intra-atrial technique, in those operated on at more than 6 years of age, and in those who suffered persistent pleural effusions in the immediate postoperative period. A significant association was found between the enteropathy and postoperative ventricular dysfunction, with RR = 11.45 (95 % CI: 2.37 to 55.16). Multivariate analysis identified ventricular dysfunction as a risk factor. Conclusion: the detection of ventricular dysfunction during the follow-up of patients with a cavopulmonary connection should guide treatment, in order to avoid the appearance of protein-losing enteropathy.

  5. CLUSIACEAE LINDL. E HYPERICACEAE JUSS. NAS RESTINGAS DO ESTADO DO PARÁ, AMAZÔNIA ORIENTAL, BRASIL

    Directory of Open Access Journals (Sweden)

    Thiago Teixeira de Oliveira

    2015-12-01

    Full Text Available This study presents a floristic-taxonomic treatment of Clusiaceae and Hypericaceae for the restingas of the State of Pará. The material was obtained from the collections of the herbaria of the Museu Paraense Emílio Goeldi (MG) and Embrapa Amazônia Oriental (IAN), and from collections made at Crispim beach, Marapanim-PA. The species descriptions were based on morphological characters and their variation within the flora, and an identification key to the species was prepared. The families are represented by four taxa: Clusiaceae comprises Clusia fockeana Miq., C. hoffmannseggiana Schltdl. and C. panapanari (Aubl.) Choisy, and Hypericaceae only Vismia guianensis (Aubl.) Choisy. C. panapanari is restricted to the restinga forest formation, while C. hoffmannseggiana and V. guianensis showed a wider distribution in the restingas of Pará. The survey of the herbarium collections showed that collections of these families in the restingas of Pará are still scarce, and further collecting effort may yield more information on flowering and fruiting periods, as well as probable new records for the study area. Keywords: Cebola brava, Pará coast, Taxonomy. DOI: http://dx.doi.org/10.18561/2179-5746/biotaamazonia.v5n4p15-21

  6. Investigation of the role of N on the optical efficiency of InGaNAs nanostructures for usage on the optoelectronic industry and optical telecommunication

    Directory of Open Access Journals (Sweden)

    Hamid Haratizadeh

    2007-12-01

    Full Text Available Recently, the quaternary InGaAsN alloy system has attracted a great deal of attention due to its potential application in devices such as next-generation multi-junction solar cells and optoelectronic devices, for example laser diodes for optical communications in the IR region. In this paper, we investigate, by photoluminescence spectroscopy, the role of nitrogen in improving the optical efficiency of InGaNAs nanostructures. The observed behavior stems from the modification of the InGaNAs band structure by nitrogen and can be explained using the band anticrossing model, which describes the interaction between the extended conduction band of the InGaAs matrix (EM) and the nitrogen-related localized level (EN). The band gap of InGaNAs is very sensitive to the nitrogen content and decreases as the nitrogen content increases, so the emission wavelength in the IR region can be controlled. Moreover, nitrogen creates potential fluctuations in the InGaNAs that act as trap centers and lead to localized excitons; the probability of exciton recombination therefore increases, improving the optical efficiency of these structures. On the other hand, nitrogen also creates fluctuations, especially at the interface between well and barrier in InGaNAs quantum structures, which increase non-radiative recombination.
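
    The band anticrossing model mentioned above has a simple closed form for the lower conduction subband edge, E-(x) = [(EN + EM) - sqrt((EN - EM)^2 + 4 C^2 x)] / 2, where x is the nitrogen fraction and C the coupling between the localized N level and the matrix band. The sketch below evaluates it with GaAs-like textbook parameter values, which are purely illustrative and not fitted to the InGaNAs samples of this record; it reproduces the qualitative claim that the gap shrinks rapidly with nitrogen content.

        import numpy as np

        # Band anticrossing (BAC) estimate of the lower subband edge E-(x).
        # E_N, E_M and C are illustrative GaAs-like values in eV, not the
        # InGaNAs parameters discussed in the record.
        E_N, E_M, C = 1.65, 1.42, 2.7
        for x in (0.0, 0.005, 0.01, 0.02):
            E_minus = 0.5 * (E_N + E_M - np.sqrt((E_N - E_M) ** 2 + 4 * C ** 2 * x))
            print(f"x = {x:.3f}  ->  E- = {E_minus:.3f} eV")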

  7. Regulación de las metalotioneínas durante el estrés y la inflamación, y su influencia durante la respuesta inflamatoria

    OpenAIRE

    Carrasco Trancoso, Javier

    2000-01-01

    Regulation of metallothioneins during stress and inflammation, and their influence on the inflammatory response. Metallothioneins (MTs) are low-molecular-weight proteins (6-7 kDa) with the capacity to bind heavy metals such as Zn and Cu. In rodents there are four different isoforms, designated MT-I, -II, -III and -IV. MT-I and MT-II are expressed in practically all tissues of the organism and are inducible by heavy metals, oxidizing agents, hormones, inf...

  8. Supercomputer methods for the solution of fundamental problems of particle physics

    International Nuclear Information System (INIS)

    Moriarty, K.J.M.; Rebbi, C.

    1990-01-01

    The authors present the motivation and methods for computer investigations in particle theory. They illustrate the computational formulation of quantum chromodynamics and selected applications to the calculation of hadronic properties. They discuss possible extensions of the methods developed for particle theory to different areas of application, such as cosmology and solid-state physics, that share common methods. Because of this commonality of methodology, advances in one area stimulate advances in other areas. They also outline future plans of research.

  9. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project: KDP-A for Phase 2 Minimum Operational Performance Standards

    Science.gov (United States)

    Grindle, Laurie; Hackenberg, Davis L.

    2016-01-01

    UAS Integration in the NAS Project has: a) Developed Technical Challenges that are crucial to UAS integration, aligned with NASA's Strategic Plan and Thrusts, and support FAA standards development. b) Demonstrated rigorous project management processes through the execution of previous phases. c) Defined Partnership Plans. d) Established path to KDP-C. Request approval of Technical Challenges, execution of partnerships and plans, and execution of near-term FY17 activities. There is an increasing need to fly UAS in the NAS to perform missions of vital importance to National Security and Defense, Emergency Management, and Science. There is also an emerging need to enable commercial applications such as cargo transport (e.g. FedEx). Unencumbered NAS Access for Civil/Commercial UAS. Provide research findings, utilizing simulation and flight tests, to support the development and validation of DAA and C2 technologies necessary for integrating Unmanned Aircraft Systems into the National Airspace System.

  10. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations
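
    The abstract names the three levels of the hybrid model but, naturally, shows no code. As a minimal illustration of the outermost (MPI) level only, the sketch below performs a 1D block decomposition with ghost-cell (halo) exchange between ranks; the per-block stencil update stands in for the OpenMP/CUDA levels, and all names and sizes are hypothetical rather than taken from HOSTA.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_local = 1000                        # interior cells per rank (illustrative)
        u = np.zeros(n_local + 2)             # +2 ghost cells at the block edges
        u[1:-1] = rank                        # dummy initial data

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        for _ in range(10):
            # halo exchange with neighbours (PROC_NULL makes boundaries no-ops)
            comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
            comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
            # in a real tri-level code this stencil update would run on the
            # GPU (CUDA) and the remaining CPU cores (OpenMP)
            u[1:-1] = 0.5 * (u[:-2] + u[2:])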

  11. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  12. A PERCEPÇÃO DA GESTÃO DO CONHECIMENTO NAS EMPRESAS EXPORTADORAS DA AMREC

    Directory of Open Access Journals (Sweden)

    Julio Cesar Zilli

    2014-06-01

    Full Text Available With globalization and the technology era, companies increasingly draw on intellectual capital, that is, the knowledge and skills exercised by their employees, to carry out activities related to the domestic or international market. In this light, the present study aims to identify the perception that foreign trade managers have of Knowledge Management (KM) in the exporting companies of the Associação dos Municípios da Região Carbonífera (AMREC). As to its ends, the research is descriptive; as to its means of investigation, it is classified as bibliographic and field research. The sample consisted of 10 exporting companies that maintained commercial relationships with foreign markets from January to December 2012. Data were collected through a questionnaire with a quantitative approach, designed to capture the managers' perception of the identification, creation, storage, sharing and use of knowledge. An unfavorable synergy is perceived on the part of managers and the organization regarding the monitoring and implementation of KM practices. Barriers such as motivation and sharing, interpersonal relationships, and support from the organizational structure and culture are present in the companies. For the development of activities aimed at the international market, these barriers must be addressed together, resulting in the beneficial use of the five dimensions of KM: identification, creation, storage, sharing and use.

  13. Coeficiente de transferência de carga nas fundações de silos verticais cilíndricos

    Directory of Open Access Journals (Sweden)

    Marivone Z. Fank

    2015-09-01

    Full Text Available ABSTRACT The design of grain storage structures lacks a Brazilian standard prescribing their design and loads; there are, moreover, many gaps in the current state of knowledge, and additional research on the subject is indispensable. In order to determine the load distribution in silo foundations, four piles located under the ring of a prototype silo were instrumented with load cells. The experiment was carried out from August to December 2009 in Palotina, PR, Brazil. The cell readings were taken by an automatic data acquisition system during loading with maize grain. From the results, an average transfer coefficient of 0.30 can be identified for the ring up to a silo loading of 44%, after which the transfer rate increased. The maximum loads on the instrumented piles were 800, 845, 520 and 600 kN, corresponding to transfer coefficients of 0.48, 0.51, 0.31 and 0.36, respectively. Thus, the regionally adopted coefficient of 0.30 for the design of the ring foundations is underestimated, and a more careful analysis of the transfer rates is needed.

  14. Análise in sílico de proteínas relacionadas a sementes e identificação de microssatélites através da bioinformática

    OpenAIRE

    MENEGHELLO, Geri Eduardo

    2007-01-01

    Among the main reserve components of a seed are carbohydrates, lipids and proteins. Besides their nutritive function, proteins have several important functions in seeds and are integral to the molecular biology of the plant. Detailed knowledge of these proteins makes it possible to adopt genetic breeding strategies aimed at increasing yield, pathogen resistance, etc... Advances in molecular biology, genomics and proteomics have driv...

  15. Fast methods for long-range interactions in complex systems. Lecture notes

    Energy Technology Data Exchange (ETDEWEB)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas (eds.)

    2011-10-13

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction and overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, Barnes-Hut Tree Method, Multigrid, FFT based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC) which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)
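
    As context for the fast methods listed above, the sketch below shows the baseline they are designed to replace: direct O(N^2) evaluation of the pairwise Coulomb interaction, here for the potential energy of N unit charges. The toy setup (random positions, unit charges) is purely illustrative; tree and multipole codes approximate the far-field part of exactly this sum to bring the cost down to O(N log N) or O(N).

        import numpy as np

        rng = np.random.default_rng(0)
        pos = rng.random((1000, 3))          # N random unit charges in a unit box

        def direct_energy(pos):
            # direct O(N^2) Coulomb sum over all pairs j > i
            E = 0.0
            for i in range(len(pos)):
                r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
                E += np.sum(1.0 / r)
            return E

        print(direct_energy(pos))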

  16. Fast methods for long-range interactions in complex systems. Lecture notes

    International Nuclear Information System (INIS)

    Sutmann, Godehard; Gibbon, Paul; Lippert, Thomas

    2011-01-01

    Parallel computing and computer simulations of complex particle systems including charges have an ever increasing impact in a broad range of fields in the physical sciences, e.g. in astrophysics, statistical physics, plasma physics, material sciences, physical chemistry, and biophysics. The present summer school, funded by the German Heraeus-Foundation, took place at the Juelich Supercomputing Centre from 6 - 10 September 2010. The focus was on providing an introduction and overview of different methods, algorithms and new trends for the computational treatment of long-range interactions in particle systems. The Lecture Notes contain an introduction to particle simulation, as well as five different fast methods, i.e. the Fast Multipole Method, Barnes-Hut Tree Method, Multigrid, FFT based methods, and Fast Summation using the non-equidistant FFT. In addition to introducing the methods, efficient parallelization of the methods is presented in detail. This publication was edited at the Juelich Supercomputing Centre (JSC) which is an integral part of the Institute for Advanced Simulation (IAS). The IAS combines the Juelich simulation sciences and the supercomputer facility in one organizational unit. It includes those parts of the scientific institutes at Forschungszentrum Juelich which use simulation on supercomputers as their main research methodology. (orig.)

  17. Enhancement of photoluminescence from GaInNAsSb quantum wells upon annealing: improvement of material quality and carrier collection by the quantum well

    International Nuclear Information System (INIS)

    Baranowski, M; Kudrawiec, R; Latkowska, M; Syperek, M; Misiewicz, J; Sarmiento, T; Harris, J S

    2013-01-01

    In this study we apply time resolved photoluminescence and contactless electroreflectance to study the carrier collection efficiency of a GaInNAsSb/GaAs quantum well (QW). We show that the enhancement of photoluminescence from GaInNAsSb quantum wells annealed at different temperatures originates not only from (i) the improvement of the optical quality of the GaInNAsSb material (i.e., removal of point defects, which are the source of nonradiative recombination) but it is also affected by (ii) the improvement of carrier collection by the QW region. The total PL efficiency is the product of these two factors, for which the optimal annealing temperatures are found to be ∼700 °C and ∼760 °C, respectively, whereas the optimal annealing temperature for the integrated PL intensity is found to be between the two temperatures and equals ∼720 °C. We connect the variation of the carrier collection efficiency with the modification of the band bending conditions in the investigated structure due to the Fermi level shift in the GaInNAsSb layer after annealing.

  18. O ensino de literatura brasileira nas escolas: uma ferramenta para a mudança social

    Directory of Open Access Journals (Sweden)

    Gustavo Zambrano

    2015-08-01

    Full Text Available This paper aims to provide a detailed, in-depth account of the external tensions that compromise education in Brazil. These tensions concern the military coup, which inhibited improvements in the educational sector; the exclusive study in schools of works considered canonical; and the content-driven study of literature required by the university entrance exams (vestibular). This is an important debate, because the current teaching of literature in schools prevents the formation of students capable of analyzing and interpreting a literary text and of understanding the historical and sociological context of a country. We therefore detail these problems and show how the teaching of Brazilian literature can be important for making students perceive social problems and, consequently, for forming critical students. Keywords: social change, canon, vestibular, teaching of Brazilian literature

  19. Mudanças nas trajetórias de vida e identidades de mulheres na contemporaneidade

    Directory of Open Access Journals (Sweden)

    Carolina de Campos Borges

    2013-03-01

    Full Text Available This article presents the results of a study carried out in Rio de Janeiro and discusses changes in the life trajectories of middle-class women over recent decades. Ten women belonging to two generations were interviewed about their life projects. All interviews were recorded and transcribed in full, and the resulting transcripts were submitted to discourse analysis. The study indicated that the deepening of individualism in contemporary social life has been changing individuals' life projects. Women's life trajectories are now less standardized; work, profession and financial independence are themes that have gained relevance in their projects. In this context, female identity is less and less influenced by traditional family roles.

  20. Formação humana e competências: o debate nas diretrizes curriculares de psicologia

    Directory of Open Access Journals (Sweden)

    Vinicius Cesca de Lima

    2014-12-01

    Full Text Available This article analyzes the perspectives on human formation and on the development of competences identified in the National Curriculum Guidelines for Undergraduate Psychology Programs. Starting from the critique of modern processes of human formation developed by the Critical Theory of Society, we analyze their institutionalization in school practices, including competence-based pedagogy, and more specifically in university education. Finally, we discuss these questions in the training of psychologists. Contrasting the proposals of education for emancipation and of education for the development of competences, we argue, on the basis of the framework underlying the Curriculum Guidelines, that training in psychology is a contradictory process that reveals a project in dispute.

  1. Emulsiones alimentarias aceite-en-agua estabilizadas con proteínas de atún

    Directory of Open Access Journals (Sweden)

    Ruiz-Márquez, D.

    2010-12-01

    Full Text Available This work focuses on the development of o/w salad-dressing-type emulsions stabilized by tuna proteins. The influence of the protein conservation method applied after extraction (freezing or lyophilization) on the rheological properties and microstructure of these emulsions was analyzed, and the processing variables during emulsification were also evaluated. Stable emulsions with adequate rheological and microstructural characteristics were prepared using 70 wt% oil and 0.50 wt% tuna proteins. From the experimental results obtained, we may conclude that the rheological properties of the emulsions are not significantly affected by the protein conservation method selected. On the other hand, an increase in homogenization speed during emulsion manufacture leads to a continuous decrease in mean droplet size and to an increase in the values of the linear viscoelastic functions, an effect that becomes less significant as the agitation speed increases further.

  2. Lampião da esquina: lutas feministas nas páginas do "Jornal Gay", luzes em tempos sombrios (Brasil, 1978-1981)

    OpenAIRE

    Silva, Daniel Henrique de Oliveira

    2016-01-01

    What representations of the feminine were constructed and conveyed in the pages of a newspaper written by gay editors? This question guides a study that seeks to understand how male homosexuals, subjects demeaned and stigmatized socially, gave visibility to another marginalized group, in this case women, in the pages of a newspaper itself considered marginal, Lampião da Esquina, which appeared in 1978, in the final years of the civil-military dictatorship. In this research, we sought to contextual...

  3. Arte ou artefato? Agência e significado nas artes indígenas

    Directory of Open Access Journals (Sweden)

    Els Lagrou

    2016-11-01

    Another interesting aspect, which stands out in both contributions, is the interrelation between the Anthropology of art and the Anthropology of things or objects. Thinking about artistic practices and objects from an anthropological perspective means uncovering the social relations and intentionalities condensed in them or transmitted by them, a point which, coincidentally, is present in other sections of this issue of Proa, including the Gallery

  4. Efectos adversos de la acumulación renal de hemoproteínas. Nuevas herramientas terapéuticas

    Directory of Open Access Journals (Sweden)

    Melania Guerrero-Hue

    2018-01-01

    Full Text Available Hemoglobin and myoglobin are hemoproteins that play a fundamental role in the organism, since they participate in oxygen transport. However, due to their chemical structure, these molecules can exert deleterious effects when released massively into the bloodstream, as occurs in certain pathological conditions associated with rhabdomyolysis or intravascular hemolysis. Once in the plasma, these hemoproteins can be filtered and accumulate in the kidney, where they are cytotoxic, mainly to the tubular epithelium, and induce acute kidney injury and chronic kidney disease. In this review we analyze the different pathological contexts that lead to renal accumulation of these hemoproteins, their relationship with short- and long-term loss of renal function, the pathophysiological mechanisms responsible for their adverse effects, and the defense systems that counteract such actions. Finally, we describe the treatments currently in use and present new therapeutic options based on the identification of new cellular and molecular targets, paying special attention to the various clinical trials currently under way.

  5. [Inventive activity of the Department of Metabolism Regulation of the Palladin Institute of Biochemistry of NAS of Ukraine].

    Science.gov (United States)

    Danilova, V M; Vynogradova, R P; Chernysh, I G; Petrenko, T M

    2016-01-01

    The article is devoted to the inventive activity of the Department of Metabolism Regulation of the Palladin Institute of Biochemistry of NAS of Ukraine in the context of the history of its inception, development and the research activities of its founder, academician of NAS of Ukraine M. F. Guly as well as his students and followers. It briefly tells about practical achievements of M. F. Guly which were as significant, immense and diverse as his scientific accomplishments. The paper analyses in detail the practical results of scientific research of his students and followers aimed to solve practical problems of medicine, food-processing, agriculture, and which are essentially a continuation of the ideas and projects of M. F. Guly.

  6. As proteínas HBP/SOUL : sobre-expressão, purificação e estudo de mutantes

    OpenAIRE

    Aveiro, Susana Seabra

    2009-01-01

    The aim of this work was to optimize the purification of the heme-binding proteins HBP/SOUL, and to study the interaction of these proteins, and their variants, with tetrapyrrole groups. In the optimization of the purification process, cell lysis was one of the steps studied, comparing the efficiency of cell lysis by sonication and by high pressure. This evaluation was made by comparing the SDS-PAGE gels obtained from the cell extracts. In addition ...

  7. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project FY16 Annual Review

    Science.gov (United States)

    Grindle, Laurie; Hackenberg, Davis

    2016-01-01

    This presentation gives insight into the research activities and efforts being executed in order to integrate unmanned aircraft systems into the national airspace system. This briefing is to inform others of the UAS-NAS FY16 progress and future directions.

  8. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  9. Multivariate control charts based on net analyte signal (NAS) and Raman spectroscopy for quality control of carbamazepine

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, Werickson Fortunato de Carvalho [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil); National Institute of Metrology, Standardization and Industrial Quality, Inmetro, Dimci/Dquim - Directorate of Metrology, Science and Industry/Division of Chemical Metrology, Av. Nossa Senhora das Gracas 50, Building 6, 25250-020, Xerem, Duque de Caxias, RJ (Brazil); Poppi, Ronei Jesus, E-mail: ronei@iqm.unicamp.br [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil); National Institute of Science and Technology (INCT) for Bioanalytics, 13083-970 Campinas, SP (Brazil)

    2011-10-31

    Raman spectroscopy and control charts based on the net analyte signal (NAS) were applied to the polymorphic characterization of carbamazepine. Carbamazepine presents four polymorphic forms: I-IV (dihydrate). X-ray powder diffraction was used as the reference technique. Three control charts were built: the NAS chart, which corresponds to the analyte of interest (form III in this case); the interference chart, which corresponds to the contribution of the other compounds in the sample; and the residual chart, which corresponds to nonsystematic variations. For each chart, statistical limits were established using samples within the quality specifications. It was possible to identify the different polymorphic forms of carbamazepine present in pharmaceutical formulations. Thus, an alternative method for monitoring the quality of carbamazepine polymorphic forms after the crystallization process is presented.
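
    The decomposition behind these charts is straightforward to prototype. The sketch below shows one common NAS formulation in numpy (project the spectrum away from the interference subspace, then onto the net analyte direction); the function names, the pure-analyte-spectrum input and the 3-sigma limits are illustrative assumptions, not details taken from the paper.

        import numpy as np

        def nas_split(x, S_int, s_k):
            """Split spectrum x into NAS, interference and residual parts.
            x: measured spectrum (p,); S_int: spectra spanning the interference
            space, one per row (m, p); s_k: pure spectrum of the analyte (p,)."""
            _, s, Vt = np.linalg.svd(S_int, full_matrices=False)
            V = Vt[:int(np.sum(s > s[0] * 1e-10))].T  # orthonormal interference basis
            interference = V @ (V.T @ x)              # part explained by interferents
            d = s_k - V @ (V.T @ s_k)                 # net analyte direction
            d /= np.linalg.norm(d)
            nas = d * (d @ (x - interference))        # part along the analyte
            return nas, interference, x - interference - nas

        def chart_limits(stats, k=3.0):
            """Control limits from a set of in-specification calibration samples."""
            return np.mean(stats) - k * np.std(stats), np.mean(stats) + k * np.std(stats)

    Each new sample then contributes one scalar per chart (for instance the norm of each of the three parts), plotted against limits computed from samples within the quality specifications.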

  10. Multivariate control charts based on net analyte signal (NAS) and Raman spectroscopy for quality control of carbamazepine

    International Nuclear Information System (INIS)

    Rocha, Werickson Fortunato de Carvalho; Poppi, Ronei Jesus

    2011-01-01

    Raman spectroscopy and control charts based on the net analyte signal (NAS) were applied to the polymorphic characterization of carbamazepine. Carbamazepine presents four polymorphic forms: I-IV (dihydrate). X-ray powder diffraction was used as the reference technique. Three control charts were built: the NAS chart, which corresponds to the analyte of interest (form III in this case); the interference chart, which corresponds to the contribution of the other compounds in the sample; and the residual chart, which corresponds to nonsystematic variations. For each chart, statistical limits were established using samples within the quality specifications. It was possible to identify the different polymorphic forms of carbamazepine present in pharmaceutical formulations. Thus, an alternative method for monitoring the quality of carbamazepine polymorphic forms after the crystallization process is presented.

  11. O assédio moral na trajetória profissional de mulheres gerentes: evidências nas histórias de vida

    OpenAIRE

    Alessandra Morgado Horta Correa

    2004-01-01

    Moral harassment is a topic that has been gaining ground in the debates of Brazilian society, in academia and also in organizations, with reports and complaints aired in the press. An organizational context driven by productivity and competitiveness demands modern management policies and a new worker profile which, combined with unemployment and social exclusion, favors an environment of authoritarianism, submission and discipline, generating in workers stress, emotional instability, insecur...

  12. Effects of growth rate on structural property and adatom migration behaviors for growth of GaInNAs/GaAs (001) by molecular beam epitaxy

    Science.gov (United States)

    Li, Jingling; Gao, Peng; Zhang, Shuguang; Wen, Lei; Gao, Fangliang; Li, Guoqiang

    2018-03-01

    We have investigated the structural properties and the growth mode of GaInNAs films prepared at different growth rates (Rg) by molecular beam epitaxy. The crystalline structure is studied by high-resolution X-ray diffraction, and the evolution of the GaInNAs film surface morphology is studied by atomic force microscopy. It is found that both the crystallinity and the surface roughness improve with increasing Rg, and the change in growth mode is attributed to adatom migration behaviors, particularly of In atoms, as verified by elemental analysis. In addition, we present theoretical calculations of the N adsorption energy that reveal the distinctive N migration behavior, which is instructive for interpreting the growth mechanism of GaInNAs films.

  13. POSSIBILIDADES DO DESENVOLVIMENTO DO TURISMO ÉTNICO NAS COMUNIDADES QUILOMBOLAS DE DIAMANTINA/MG: OPORTUNIDADES E DESAFIOS

    Directory of Open Access Journals (Sweden)

    Elcione Luciana da Silva

    2016-04-01

    The concept of ethnic tourism has been developing over recent decades and involves cultural valorization and the possibility of fostering diverse experiences and interrelations between visitors and quilombola communities. This article deals with the quilombola communities of Mata dos Crioulos, Vargem do Inhaí and Quartel de Indaiá (Diamantina/MG), which have been suffering territorial conflicts and pressures, making the maintenance of local culture even more difficult. The objective of this research is to suggest the development of ethnic tourism in these communities as a form of local development and of valorization of material and immaterial cultural heritage. Bibliographic and documentary references were used to reach the research results. Keywords: ethnic tourism; quilombola communities; environmental and territorial conflicts.

  14. [Darius Staliūnas. Making Russians : meaning and practice of russification in Lithuania and Belarus after 1863]

    Index Scriptorium Estoniae

    Woodworth, Bradley D., 1963-

    2011-01-01

    Review of: Darius Staliūnas. Making Russians. Meaning and practice of russification in Lithuania and Belarus after 1863. On the boundary of two worlds: identity, freedom, and moral imagination in the Baltica, 11. (Amsterdam : Rodopi, 2007)

  15. O ensino e a experiência nas narrativas de professores de Inglês

    Directory of Open Access Journals (Sweden)

    Annallena de Souza Guedes

    2016-09-01

    This paper analyzes three narratives of practicing English teachers, which reveal experiences related to the process of teaching English in public institutions in Brazil. Based on the concept of experience (MICCOLI, 2010), we seek to understand which experiences emerge from these narratives and how they influence the teachers' teaching practice. We also analyze how English teachers see themselves and which challenges they face in their work contexts. The results of this study showed that, despite all the experiences of difficulty and indiscipline revealed in the narratives, two of the teachers still seem to find motivation and hope in their profession. Moreover, we found that the way teachers see their reality, their students and their work is important in characterizing their professional practice. Thus, we believe that the context and experiences portrayed in the narratives may guide these teachers' actions in the classroom and, consequently, enable reflection on and changes in their role as educators.

  16. Uma outra ideia da Índia. As literaturas nas línguas Bhashas

    Directory of Open Access Journals (Sweden)

    Cielo Griselda Festino

    2013-04-01

    http://dx.doi.org/10.5007/2175-7968.2013v1n31p103 The aim of this article is to discuss Indian narratives in the bhasha languages, the vernacular languages of the Indian subcontinent, through a politics and poetics of translation that gives voice and visibility to cultures that would otherwise remain restricted to the particular cultures in which they are produced. In this way, not only do the literatures of the "front yard", that is, Indian narratives written in the English of the diaspora, gain visibility, but so do the narratives of the "backyard", written in the vernacular languages of India. In this process, the term "vernacular" takes on a new meaning, in the sense that what is really "vernacularized" is the English language, because it becomes a vehicle through which the bhasha literatures become known. To illustrate this process, the article offers an analysis of the short story "Thayyaal", written in Tamil, a language of southern India.

  17. Inventive activity of the Department of Metabolism Regulation of the Palladin Institute of Biochemistry of NAS of Ukraine

    Directory of Open Access Journals (Sweden)

    V. M. Danilova

    2016-12-01

    The article surveys the inventive activity of the Department of Metabolism Regulation of the Palladin Institute of Biochemistry of NAS of Ukraine in the context of the history of its founding and development and of the research of its founder, academician of NAS of Ukraine M. F. Guly, as well as of his students and followers. It briefly describes the practical achievements of M. F. Guly, which were as significant, immense and diverse as his scientific accomplishments. The paper analyses in detail the practical results of the scientific research of his students and followers, aimed at solving practical problems of medicine, food processing and agriculture, which are essentially a continuation of the ideas and projects of M. F. Guly.

  18. O planejamento do "Recreio nas Férias" na cidade paulista de Americana / Planning for the program "Leisure During the School Break" ("Recreio nas Férias") in the Brazilian city of Americana, state of São Paulo

    Directory of Open Access Journals (Sweden)

    Nayara Torre de Almeida

    2012-06-01

    This article reports on the planning and implementation of the project "Recreio nas Férias" ("Leisure During the School Break") in one of the centers of the "Segundo Tempo" ("Second Half") program, located in the Vila Jones neighborhood of the city of Americana, São Paulo. "Recreio nas Férias" is an initiative of the Brazilian Ministry of Sports, launched in 2009 with the aim of offering children and adolescents in the program, during the school break, options of sport and leisure (e.g., playful, sporting, artistic, cultural, social and tourist activities) that "fill their free time in a pleasurable and at the same time constructive way" (BRASIL, 2010). Confronting these objectives with current studies in the field of leisure, it is possible to see that public policy still retains the historical view that treats leisure as the occupation of idle time, especially in proposals aimed at the working classes. This is evident in the explicit concern with "filling free time", which emphasizes its occupational character. Drawing on the studies of Silva (2003, 2008) and Marcellino (1995, 2001, 2008), we understand leisure as one of the possibilities for experiencing play and building citizenship; that is, we seek to answer the following question: can leisure assume a role other than the "occupational" one historically observed in sport and leisure policies aimed at the working classes?

  19. Proteinograma sérico, com ênfase em proteínas de fase aguda, de bovinos sadios e bovinos portadores de enfermidade aguda de ocorrência natural

    OpenAIRE

    Simplício,K.M.M.G.; Sousa,F.C.; Fagliari,J.J.; Silva,P.C.

    2013-01-01

    In recent decades, acute phase proteins (APPs) have become the biomarkers of choice in human medicine for identifying and monitoring disease. There is no reason to imagine that such clinical research would not be equally useful in veterinary medicine. In order to verify the importance of APPs as biomarkers of inflammatory disease in cattle, the serum proteinogram was determined by SDS-PAGE electrophoresis, with special interest in the APPs. ...

  20. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting

  1. Consumidoras e heroínas: gênero na telenovela

    Directory of Open Access Journals (Sweden)

    Heloisa Buarque de Almeida

    2007-01-01

    http://dx.doi.org/10.1590/S0104-026X2007000100011 This paper explores the correlations among telenovelas, consumption and gender, seeking to understand how the media are tied to the promotion of goods and of consumer culture, and how gender is an important axis in this articulation. The research was based on an ethnographic study of telenovela reception and unfolds into an analysis of the relationship between television and advertising, discussing the feminization of consumption and the construction of a certain hegemonic feminine image in telenovelas and commercials.

  2. High-Performance All-Solid-State Na-S Battery Enabled by Casting-Annealing Technology.

    Science.gov (United States)

    Fan, Xiulin; Yue, Jie; Han, Fudong; Chen, Ji; Deng, Tao; Zhou, Xiuquan; Hou, Singyuk; Wang, Chunsheng

    2018-04-24

    Room-temperature all-solid-state Na-S batteries (ASNSBs) using sulfide solid electrolytes are a promising next-generation battery technology due to the high energy, enhanced safety, and earth-abundant resources of both sodium and sulfur. Currently, sulfide-electrolyte ASNSBs are fabricated by a simple cold-pressing process that leaves high residual stress. Even worse, the large volume change of S/Na2S during charge/discharge cycles induces additional stress, seriously weakening the loosely contacted interfaces among the solid electrolyte, active materials, and the electron-conducting agent formed in the cold-pressing process. The high and continuously increasing interface resistance has hindered practical application. Herein, we significantly reduce the interface resistance and eliminate the residual stress in Na2S cathodes by fabricating Na2S-Na3PS4-CMK-3 nanocomposites using melt-casting followed by a stress-release annealing-precipitation process. The casting-annealing process guarantees close contact between the Na3PS4 solid electrolyte and the CMK-3 mesoporous carbon in a mixed ionic/electronic conductive matrix, while the Na2S active species precipitated in situ from the solid electrolyte during annealing guarantees interfacial contact among these three subcomponents without residual stress, which greatly reduces the interfacial resistance and enhances the electrochemical performance. The in situ synthesized Na2S-Na3PS4-CMK-3 composite cathode delivers a stable and highly reversible capacity of 810 mAh/g at 50 mA/g for 50 cycles at 60 °C. The present casting-annealing strategy should provide opportunities for the advancement of mechanically robust, high-performance next-generation ASNSBs.

  3. Proteínas inmunodominantes de Brucella Melitensis evaluadas por Western Blot

    Directory of Open Access Journals (Sweden)

    Elizabeth Anaya

    1997-01-01

    Total protein extracts of Brucella melitensis were separated on 15% SDS-PAGE gels. Their seroreactivity was analyzed by Western blot with satisfactory results. For this purpose, negative control sera (n=03) and sera from patients with brucellosis (n=34), cholera (n=12), typhoid (n=02) and tuberculosis (n=02) were used. This immunodiagnostic test detected highly specific (100%) seroreactive bands corresponding to 8, 14 and 18 kDa, a 25-48 kDa complex, and 58 kDa. The sensitivity of the test was 90% using the aforementioned sera.

  4. Apontamentos acerca dos métodos de pesquisa nas ciências sociais (Research methods in social sciences

    Directory of Open Access Journals (Sweden)

    Jussara Ayres Bourguignon

    2011-01-01

    This article discusses research methods in the Social Sciences, drawing on three classics of modern thought: Émile Durkheim, Max Weber and Karl Marx. Durkheim, with his comparative method based on positivist ideas; Weber, with his interpretive ("understanding") method; and Marx, with his dialectical materialist method, have influenced, and still influence, research in the Social Sciences. The article also reflects on the concept of research methodology based on Minayo (2007), who considers that, in the process of investigation, methodology is a construction arising from the subject/object relation, guided by clear and precise theoretical foundations.

  5. NREL Research Earns Two Prestigious R&D 100 Awards

    Science.gov (United States)

    NREL research earned two R&D 100 Awards, known as the "Oscars" of Innovation, honoring research and development that not only creates jobs in America but helps advance the goal of a clean energy future. Among the winners was a high-performance supercomputing platform that uses warm water to prevent heat build-up; those named for that initiative were NREL's Steve Hammond and Nicolas Dube of HP, with NREL's Steve Johnston also among those recognized.

  6. Resonance – Journal of Science Education | Indian Academy of ...

    Indian Academy of Sciences (India)

    Jatan K Modi (1), Sachin P Nanavati (2), Amit S Phadke (1), Prasanta K Panigrahi (3). (1) Dharmsinh Desai Institute of Technology, Nadiad 387 001, India. (2) National PARAM Supercomputing Facility, Centre for Development of Advanced Computing (C-DAC), Pune University Campus, Ganesh Khind, Pune 411 007, India. (3) Physical ...

  7. UAS Integration Into the NAS: An Examination of Baseline Compliance in the Current Airspace System

    Science.gov (United States)

    Fern, Lisa; Kenny, Caitlin A.; Shively, Robert J.; Johnson, Walter

    2012-01-01

    As a result of the FAA Modernization and Reform Act of 2012, Unmanned Aerial Systems (UAS) are expected to be integrated into the National Airspace System (NAS) by 2015. Several human factors challenges need to be addressed before UAS can safely and routinely fly in the NAS with manned aircraft. Perhaps the most significant challenge is for the UAS to be non-disruptive to the air traffic management system. Another human factors challenge is how to provide UAS pilots with intuitive traffic information in order to support situation awareness (SA) of their airspace environment as well as a see-and-avoid capability comparable to manned aircraft so that a UAS pilot could safely maneuver the aircraft to maintain separation and collision avoidance if necessary. A simulation experiment was conducted to examine baseline compliance of UAS operations in the current airspace system. Researchers also examined the effects of introducing a Cockpit Situation Display (CSD) into a UAS Ground Control Station (GCS) on UAS pilot performance, workload and situation awareness while flying in a positively controlled sector. Pilots were tasked with conducting a highway patrol police mission with a Medium Altitude Long Endurance (MALE) UAS in L.A. Center airspace with two mission objectives: 1) to reroute the UAS when issued new instructions from their commander, and 2) to communicate with Air Traffic Control (ATC) to negotiate flight plan changes and respond to vectoring and altitude change instructions. Objective aircraft separation data, workload ratings, SA data, and subjective ratings regarding UAS operations in the NAS were collected. Results indicate that UAS pilots were able to comply appropriately with ATC instructions. In addition, the introduction of the CSD improved pilot SA and reduced workload associated with UAS and ATC interactions.

  8. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    Science.gov (United States)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithography-dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics, along with substrate, exposure, and all post-exposure processing, must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench with a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributed computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full-vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than less rigorous models; then some applications that use our distributed computing solution are shown.

  9. Redshift and blueshift of GaNAs/GaAs multiple quantum wells induced by rapid thermal annealing

    Science.gov (United States)

    Sun, Yijun; Cheng, Zhiyuan; Zhou, Qiang; Sun, Ying; Sun, Jiabao; Liu, Yanhua; Wang, Meifang; Cao, Zhen; Ye, Zhi; Xu, Mingsheng; Ding, Yong; Chen, Peng; Heuken, Michael; Egawa, Takashi

    2018-02-01

    The effects of rapid thermal annealing (RTA) on the optical properties of GaNAs/GaAs multiple quantum wells (MQWs) grown by chemical beam epitaxy (CBE) are studied by photoluminescence (PL) at 77 K. The results show that the optical quality of the MQWs improves significantly after RTA. With increasing RTA temperature, the PL peak energy of the MQWs redshifts below 1023 K, while it blueshifts above 1023 K. Two competing processes that occur simultaneously during RTA result in the redshift at low temperatures and the blueshift at high temperatures. It is also found that the PL peak energy shift can be explained neither by nitrogen diffusion out of the quantum wells nor by nitrogen reorganization inside the quantum wells. Instead, it can be quantitatively explained by a modified recombination coupling model in which redshift and blueshift nonradiative recombination channels coexist. These results have significant implications for the growth and RTA of GaNAs material for high-performance optoelectronic device applications.

  10. A presença feminina nas (sub)culturas juvenis: a arte de se tornar visível

    Directory of Open Access Journals (Sweden)

    Wivian Weller

    2005-01-01

    http://dx.doi.org/10.1590/S0104-026X2005000100008 In the existing literature, there is a gap with respect to female participation in youth (sub)cultures. Do female adolescents constitute a minority in the hip hop movement or in other cultural manifestations such as crews or gangs? This article questions the absence of studies on female adolescents, both in work on youth and in feminist studies, stressing the need for research aimed at understanding youth action in its specific contexts. Based on empirical data on young black women and young women of Turkish origin in the hip hop movement in the cities of São Paulo and Berlin, it also discusses the struggle for space and recognition within this predominantly male cultural movement.

  11. O oficial e o oficioso: objeto e regulação de conflitos nas Antilhas Francesas (1848-1850

    Directory of Open Access Journals (Sweden)

    Myriam Cottias

    2004-10-01

    The abolition of slavery by the Provisional Government of the Second French Republic in April 1848 redefined public space in the colonies by establishing civil, political and "racial" equality among the citizens of the Republic. This article examines the implications of this decision in the context of the French Antilles, particularly the attribution and registration of surnames to the former slaves and the interplay between the new juridical relations of work and the old relations of social dependency. The article goes on to discuss the formation and history of the Cantonal Juries, an institution created to administer the new civil and labour regime in the colonies, as well as the civil and criminal legal actions taken by agents in this new juridical context, actions in which the conflicting aspirations of the former slaves and their old masters become clearly evident.

  12. LEITURA: QUADRO CONCEITUAL DA PRÁXIS NAS ORGANIZAÇÕES QUE INOVAM

    Directory of Open Access Journals (Sweden)

    Valdecir Pereira Uved

    2014-12-01

    Writing divided the history of humanity and revolutionized the ways knowledge is produced and transmitted. In this context, this exploratory conceptual essay aims to analyze reading as an element and praxis already present in the day-to-day life of innovative organizations. This activity, practically invisible in organizational routine, is understood as a possible differentiator for companies seeking to become innovative spaces. The study assumed the environment of innovative organizations. The choice to analyze reading and its role in human and organizational development rests on the scarcity of studies that approach reading as a practice for innovation in organizations. To this end, the concept of innovation was first reviewed, and in its light reading and its process for human and organizational development were analyzed from a philosophical perspective, with a view to innovation. It was possible to conclude that reading is one of the practices that, when present in organizations, contributes to innovation. As an opportunity for future research, we highlight exploratory work empirically grounded in the analysis of the so-called individual, collective and contextual actors.

  13. O estilo de reminiscência nas interações mãe-criança e pai-criança

    OpenAIRE

    Rebelo, Ana; Maia, Joana; Gatinho, Ana; Coelho, Leandra; Torres, Nuno; Veríssimo, Manuela

    2016-01-01

    In recent decades, research on adult-child reminiscing has focused essentially on how mothers talk about past events with their children. More recently, however, researchers have sought to understand the importance of the father in the development of children's communication. The present study explores differences in reminiscence style between mother-child and father-child dyads as a function of the child's sex. Participants were 79 childr...

  14. Quantum oscillations and interference effects in strained n- and p-type modulation doped GaInNAs/GaAs quantum wells

    Science.gov (United States)

    Sarcan, F.; Nutku, F.; Donmez, O.; Kuruoglu, F.; Mutlu, S.; Erol, A.; Yildirim, S.; Arikan, M. C.

    2015-08-01

    We have performed magnetoresistance measurements on n- and p-type modulation-doped GaInNAs/GaAs quantum well (QW) structures in both the weak- and strong-field regimes. The magnetoresistance traces are used to extract the spin coherence, phase coherence and elastic scattering times, as well as the Rashba parameters and the spin-splitting energy. The Rashba parameters calculated for the nitrogen-containing samples reveal that the nitrogen composition is a significant parameter in determining the strength of the spin-orbit interaction. Consequently, GaInNAs-based QW structures with various nitrogen compositions can be used to tune the spin-orbit coupling strength and are candidates for spintronics applications.
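
    For reference, the Rashba parameter and spin-splitting energy quoted here are commonly defined through the standard Rashba model for a two-dimensional electron gas; the following is that textbook form, stated as an assumption since the record does not quote the paper's exact expressions:

        E_{\pm}(k) = \frac{\hbar^2 k^2}{2 m^*} \pm \alpha k,
        \qquad \Delta E(k) = E_{+}(k) - E_{-}(k) = 2 \alpha k

    where \alpha is the Rashba parameter and k the in-plane wave vector, so a nitrogen-composition-dependent \alpha translates directly into a tunable zero-field spin splitting.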

  15. Cohort Profile : The National Academy of Sciences-National Research Council Twin Registry (NAS-NRC Twin Registry)

    NARCIS (Netherlands)

    Gatz, Margaret; Harris, Jennifer R.; Kaprio, Jaakko; McGue, Matt; Smith, Nicholas L.; Snieder, Harold; Spiro, Avron; Butler, David A.

    The National Academy of Sciences-National Research Council Twin Registry (NAS-NRC Twin Registry) is a comprehensive registry of White male twin pairs born in the USA between 1917 and 1927, both of the twins having served in the military. The purpose was medical research and ultimately improved

  16. Comparación de los patrones electroforéticos de proteínas en extractos de hojas de Senecio niveoaureus Cuatr. en un gradiente altitudinal en el Páramo de Chingaza (Colombia)

    Directory of Open Access Journals (Sweden)

    Fagua Alvarez Florez

    2000-01-01

    In high-mountain (páramo) ecosystems, plants are exposed to low temperatures at night. In this work, the electrophoretic patterns of apoplast proteins in Espeletia killipii (caulescent rosette) and Senecio niveoaureus (acaulescent) are compared for the first time. These species mainly present proteins with molecular weights from 35 to 11 kDa. Proteins with similar molecular weights have been described as antifreeze proteins (AFPs) in Arctic fish and in the apoplast of temperate-zone plants. Some of the extracts analyzed showed beta-1,3-glucanase and chitinase activity (enzymes related to proteins that protect against low temperatures). Using the ConA-peroxidase and Schiff methods, it was determined that some proteins present in the apoplastic extracts are glycoproteins.

  17. Advanced 0.3-NA EUV lithography capabilities at the ALS

    International Nuclear Information System (INIS)

    Naulleau, Patrick; Anderson, Erik; Dean, Kim; Denham, Paul; Goldberg, Kenneth A.; Hoef, Brian; Jackson, Keith

    2005-01-01

    For volume nanoelectronics production using extreme ultraviolet (EUV) lithography [1] to become a reality around the year 2011, advanced EUV research tools are required today. Microfield exposure tools have played a vital role in the early development of EUV lithography [2-4], concentrating on numerical apertures (NA) of 0.2 and smaller. With EUV expected to enter production at the 32-nm node with NAs of 0.25, these early research tools can no longer provide relevant learning. To overcome this problem, a new generation of microfield exposure tools, operating at an NA of 0.3, has been developed [5-8]. Like their predecessors, these tools trade off field size and speed for greatly reduced complexity. One of these tools is implemented at Lawrence Berkeley National Laboratory's Advanced Light Source synchrotron radiation facility. This tool gets around the intrinsically high coherence of the synchrotron source [9,10] by using an active illuminator scheme [11]. Here we describe recent printing results obtained with the Berkeley EUV exposure tool. Limited by the availability of ultra-high-resolution chemically amplified resists, present resolution limits are approximately 32 nm for equal lines and spaces and 27 nm for semi-isolated lines.

  18. O Impacto do project finance nas empresas portuguesas no setor têxtil

    OpenAIRE

    Ribeiro, Sónia Patrícia dos Santos

    2012-01-01

    Dissertation for the Master's degree in Accounting and Finance. Supervisor: Adalmiro Álvaro Malheiro de Castro Andrade Pereira (MSc). This dissertation, developed within the Master's in Accounting and Finance, analyzes the impact of Project Finance on Portuguese companies in the textile sector. Project Finance is an innovative form of project financing, widely used in the United States and in Europe, applied essentially to large-sc...

  19. Independent determination of In and N concentrations in GaInNAs alloys

    International Nuclear Information System (INIS)

    Lu, W; Lim, J J; Bull, S; Andrianov, A V; Larkins, E C; Staddon, C; Foxon, C T; Sadeghi, M; Wang, S M; Larsson, A

    2009-01-01

    High-resolution x-ray diffraction (HRXRD) and photoreflectance (PR) spectroscopy were used to independently determine the In and N concentrations in GaInNAs alloys grown by solid-source molecular beam epitaxy (SSMBE). The lattice constant and the bandgap energy can be expressed as two independent equations in terms of the In and N concentrations. The HRXRD measurement provided the lattice constant and the PR measurement extracted the bandgap energy. By solving these two equations simultaneously, we determined the In and N concentrations with errors as small as 0.001.
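
    To make the two-equation idea concrete, here is a minimal numerical sketch that solves for the two compositions. It assumes a Vegard-type linear interpolation for the lattice constant and a simple linearized composition dependence for the bandgap; every coefficient below is an illustrative placeholder, not a value from the paper, whose actual bandgap model (e.g. band anticrossing for dilute nitrides) would replace the linear term.

        import numpy as np
        from scipy.optimize import fsolve

        # Illustrative parameters (assumptions, not taken from the paper)
        A_GAAS, A_INAS = 5.6533, 6.0583  # lattice constants (Angstrom)
        DA_N = -0.40                     # assumed lattice change per unit N fraction
        EG_GAAS = 1.424                  # GaAs bandgap (eV)
        DEG_IN, DEG_N = -1.2, -15.0      # assumed gap shifts per unit fraction (eV)

        def equations(c, a_meas, eg_meas):
            x_in, y_n = c
            a = A_GAAS + (A_INAS - A_GAAS) * x_in + DA_N * y_n  # Vegard-like lattice
            eg = EG_GAAS + DEG_IN * x_in + DEG_N * y_n          # linearized bandgap
            return a - a_meas, eg - eg_meas

        # a_meas comes from HRXRD, eg_meas from PR
        x_in, y_n = fsolve(equations, x0=(0.1, 0.01), args=(5.70, 1.05))
        print(f"In fraction: {x_in:.3f}, N fraction: {y_n:.3f}")

    The pairing works because, with these assumed signs, In expands the lattice while N contracts it, and both lower the gap, so the two measurements pin down a single (In, N) composition.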

  20. Photoluminescence and magnetophotoluminescence studies in GaInNAs/GaAs quantum wells

    Science.gov (United States)

    Segura, J.; Garro, N.; Cantarero, A.; Miguel-Sánchez, J.; Guzmán, A.; Hierro, A.

    2007-04-01

    We investigate the effects of electron and hole localization in the emission of a GaInNAs/GaAs single quantum well at low temperatures. Photoluminescence measurements varying the excitation density and under magnetic fields up to 14 T have been carried out. The results indicate that electrons are strongly localized in these systems due to small fluctuations in the nitrogen content of the quaternary alloy. The low linear diamagnetic shift of the emission points out the weakness of the Coulomb correlation between electrons and holes and suggests an additional partial localization of the holes.

  1. Materiales biodegradables en base a proteínas de soja y montmorillonitas

    OpenAIRE

    Echeverría, Ignacio

    2012-01-01

    Among biomaterials, soy proteins have the capacity to form edible and/or biodegradable films. Compared with synthetic polymers, these protein films exhibit excellent barrier properties against gases, lipids and aromas, but they commonly do not show mechanical and water vapor barrier properties satisfactory for practical applications. In order to improve the functionality of these films, this work studied the preparation of na...

  2. O design gráfico tropicalista e sua repercussão nas capas de disco da década de 1970

    OpenAIRE

    Oliveira, Cauhana Tafarelo de

    2014-01-01

    This research investigated Tropicalismo (1967-1968) and analyzed how its graphic language reverberated in record covers from 1969 to 1978, that is, during the ten years following the movement's official end. The dissertation reflects on the aesthetic and cultural characteristics present in the graphic designs, considering the historical context of those years and the proposals of the tropicalist song, as well as the mutual influences between the sounds and visual culture of the period. To this end, ...

  3. A Participação do Brasil nas Operações de Paz: passado, presente e futuro

    Directory of Open Access Journals (Sweden)

    Sergio Luiz Cruz Aguilar

    2015-03-01

    Brazil has a 66-year history of participation in United Nations peace operations and special political missions and in assistance missions of the Organization of American States (OAS), in addition to the Ecuador-Peru Military Observer Mission, to which it has sent military observers, police, electoral experts, health specialists, civilians and armed troops. This article presents how Brazilian participation in peace operations has unfolded and the current situation of the Brazilian presence in ongoing missions, and discusses the country's motivations and challenges in relation to the trends of these missions.

  4. Estratégias persuasivas (Alpha versus Ômega) nas mensagens publicitárias: os efeitos no consumo de álcool pelos jovens

    OpenAIRE

    Marcondes, Luciana Passos

    2014-01-01

    Social marketing campaigns are developed to address problematic drinking behaviors and reduce the potential for harm among young people, as well as to promote moderate alcohol consumption and responsible behavior. In many cases, the onset of alcohol use is encouraged by interpersonal and individual factors, including biological, domestic and environmental aspects. Such actions have thus been generating debate and receiving special attention in the academic and business communities in rec...

  5. Ciclo de produção de cultivares de framboeseiras (Rubus idaeus submetidas à poda drástica nas condições do sul de Minas Gerais

    Directory of Open Access Journals (Sweden)

    Luana Aparecida Castilho Maro

    2012-06-01

    The present study aimed to evaluate the production cycle of raspberry cultivars subjected to drastic pruning under the soil and climate conditions of southern Minas Gerais. The chosen cultivars, Batum, Autumn Bliss, Heritage and Golden Bliss, were evaluated from drastic pruning through fruit production and development on the primary and secondary canes and on the subapical buds. The canes emitted after winter pruning were tagged and evaluated for the beginning and end of the flowering and fruiting phases. To determine the fruit growth curve, weekly samples were taken from the beginning of fruit formation until harvest. It is concluded that two flushes of shoot growth arise from the root system. The cultivars differ in their production cycle on primary and secondary canes; the subapical buds show low budding and flowering capacity; and the fruits of the different cultivars follow a simple sigmoidal growth pattern.

  6. Estado e controle nas prisões

    Directory of Open Access Journals (Sweden)

    Analía Soria Batista

    This article analyzes the problem of the production of control and order in Brazilian prisons from historical and sociological perspectives, and raises the hypothesis that two modes of constructing order and control coexist in Brazilian prisons. One of them, the minority mode, is based on the State's prerogative in managing everyday prison life. The other involves negotiating the pacification of the prison between the State and inmate leaderships. Although, in the first case, the State's prerogative can be linked to adequate institutional conditions and, in the second (negotiation between the State and inmate leaderships), to the precarious conditions of the prisons, such as overcrowding and a reduced number of prison officers, among others, the analysis showed that both modes translate historically produced forms of relationship and social interaction between State and society, going back to the founding of the Republic and recreated through the habitus of social actors, and are not restricted exclusively to the social space of prisons.

  7. A ciência nas utopias de Campanella, Bacon, Comenius, e Glanvill

    Directory of Open Access Journals (Sweden)

    Bernardo Jefferson de Oliveira

    2002-12-01

    This article makes a comparative analysis of the role that science and technology play in the societies described in early modern utopias: Tommaso Campanella's The City of the Sun, Francis Bacon's New Atlantis, Jan Amos Comenius's Panorthosia, and Joseph Glanvill's complement to the New Atlantis, "The summe of my lord Bacon's New Atlantis".

  8. A Gestão Colaborativa da Marca nas Redes Sociais Virtuais

    Directory of Open Access Journals (Sweden)

    Clóvis Reis

    2010-03-01

    Starting from the concept of collaborative or cooperative learning (an educational process based on joint work, information sharing and the interdependence of group members), this article discusses a proposal for collaborative brand management in virtual social networks. In collaborative management, corporate communication strategy evolves from a one-to-many to a many-to-many model, along a horizontal, two-way line of action. DOI: 10.5585/remark.v8i2.2133

  9. Computation Directorate 2008 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of computing power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced-technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a "bright spot amid turmoil" in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and with greater responsiveness to customers.

  10. The influence of As/III pressure ratio on nitrogen nearest-neighbor environments in as-grown GaInNAs quantum wells

    International Nuclear Information System (INIS)

    Kudrawiec, R.; Poloczek, P.; Misiewicz, J.; Korpijaervi, V.-M.; Laukkanen, P.; Pakarinen, J.; Dumitrescu, M.; Guina, M.; Pessa, M.

    2009-01-01

    The energy fine structure corresponding to different nitrogen nearest-neighbor environments was observed in contactless electroreflectance (CER) spectra of as-grown GaInNAs quantum wells (QWs) obtained at various As/III pressure ratios. In the spectral range of the fundamental transition, two CER resonances were detected for samples grown at low As pressures, whereas only one CER resonance was observed for samples obtained at higher As pressures. This resonance corresponds to the nitrogen nearest-neighbor environment most favorable in terms of total crystal energy. This means that the nitrogen nearest-neighbor environment in GaInNAs QWs can be controlled in the molecular beam epitaxy process via the As/III pressure ratio.

  11. Aviation Research and the Internet

    Science.gov (United States)

    Scott, Antoinette M.

    1995-01-01

    The Internet is a network of networks. It was originally funded by the Defense Advanced Research Projects Agency (DOD/DARPA) and evolved in part from the connection of supercomputer sites across the United States. The National Science Foundation (NSF) made the most of its supercomputers by connecting the sites to each other, which made the supercomputers more efficient and now allows scientists, engineers and researchers to access them from their own labs and offices. The high-speed networks that connect the NSF supercomputers form the backbone of the Internet. The World Wide Web (WWW) is a menu system that gathers Internet resources from all over the world into a series of screens that appear on your computer. The WWW is also a distributed system: information is stored on many computers (servers), which retrieve the data when you ask for it. Hypermedia is the basis of the WWW: one can "click" on a section and visit other hypermedia (pages). Our approach to demonstrating the importance of aviation research through the Internet began with learning how to put pages on the Internet (on-line) ourselves. We were assigned two aviation companies, Vision Micro Systems Inc. and Innovative Aerodynamic Technologies (IAT), and developed home pages for these SBIR companies. The equipment used to create the pages comprised UNIX and Macintosh machines; HTML Supertext software was used to write the pages and a Sharp JX600S scanner to scan the images. As a result, using UNIX, Macintosh, Sun, PC, and AXIL machines, we were able to present our home pages to over 800,000 visitors.

  12. OLICIES AND PRACTICES FOR IMPLEMENTATION OF IFRS AND NAS IN THE REPUBLIC OF MOLDOVA

    Directory of Open Access Journals (Sweden)

    Lica ERHAN

    2015-06-01

    This study aims to analyse the process of harmonization of the national accounting standards of the Republic of Moldova with the international standards. It highlights the main advantages, disadvantages, risks and opportunities of implementing the new standards. A major step for the Republic of Moldova was the implementation of IFRS, which became mandatory for all public-interest entities from 1 January 2012, and the adoption of new NAS in accordance with the EU Directives and IFRS for small and medium-sized entities, for which the transition to IFRS was difficult due to the high costs involved. The new NAS came into force on 1 January 2014 as a recommendation and became mandatory for all entities starting 1 January 2015. The paper includes a practical analysis of the impact of the transition to IFRS on the financial results of a public-interest entity, Moldova Agroindbank, the largest commercial bank with the highest market share in the banking sector of the Republic of Moldova. From the analysis of primary and secondary indicators calculated on the basis of the financial statements prepared by the commercial bank at 31.12.2011, we found that the transition to IFRS resulted in growth of all financial indicators.

  13. Filmes biodegradáveis à base de proteínas miofibrilares de pescado Biodegradable films based on myofibrillar proteins of fish

    Directory of Open Access Journals (Sweden)

    Elessandra da Rosa Zavareze

    2012-05-01

    The objective of this work was to study the physical, mechanical and barrier properties of films produced from different concentrations of myofibrillar proteins from fish of low commercial value. The fish used was croaker (Micropogonias furnieri), which was gutted and filleted. The myofibrillar proteins were obtained from the muscle through successive washes with distilled water. The films were made with 3, 4 and 5% of myofibrillar proteins by the casting method. The films were analyzed for thickness, solubility, opacity, tensile strength, elongation and water vapor permeability. Increasing the concentration of myofibrillar proteins increased the films' thickness, opacity, tensile strength and water vapor permeability, and reduced their elongation at break.

  14. Programa Mais Educação: impactos e perspectivas nas escolas do campo

    Directory of Open Access Journals (Sweden)

    Cláudia da Mota Darós Parente

    2017-08-01

    This study aims to analyze the impacts of the "Mais Educação" Program in Brazilian countryside schools, with reflections on the limits and possibilities of the program and of full-time education. Information was collected through electronic questionnaires sent to public schools participating in the program. The research considered different aspects: expanding the school day; recording of full-time enrollments in the school census; provision of human, educational and financial resources; changes in available spaces; provision of educational, cultural, artistic and sports activities; improvement in communication with the community; provision of continuing education; changes in the political-pedagogical project and the school curriculum; changes in student behavior; improvement in school performance; improvement in the quality of school meals; development of partnerships; and use of other available spaces. Through a quantitative and qualitative analysis, we identified significant impacts of the program in countryside schools, especially regarding the expansion of educational opportunities. However, the benefits achieved occur amid the historical problems of countryside schools, which were not overcome by the program's format and depend on the attention of local governments (states, municipalities and the Federal District). The study presents reflections on the limits and possibilities of the "Mais Educação" Program and of full-time education in Brazilian countryside schools.

  15. Mulheres e política nas notícias: Estereótipos de gênero e competência política

    Directory of Open Access Journals (Sweden)

    Flávia Biroli

    2012-10-01

    This article analyzes gender representations in the news of the main Brazilian weekly magazines. It finds that the reduced presence of women is accompanied by stereotypes that refer to certain conceptions of women's role in society and of their competence to act in public life. Women's relation to private life is the backbone of these stereotypes; its complement is the emphasis on appearance and the reinforcement of beauty as a mode of feminine distinction. The article presents a qualitative analysis of the presence of the three women with the greatest visibility in the news during the period analyzed, the years 2006 and 2007: Heloisa Helena, Marta Suplicy and Dilma Rousseff. This analysis allows us to discuss representations of femininity and masculinity, of the private and the public, that give meaning to the differentiated presence of men and women in politics and in the media.

  16. CRM NAS ORGANIZAÇÕES

    Directory of Open Access Journals (Sweden)

    Leonardo Arruda Ribas

    2005-06-01

    Faced with the forces imposed by globalization, the Internet and technological evolution, combined with an era of discontinuity, the result is a new type of consumer, more questioning and demanding, whom organizations must win over in order to achieve loyalty. Many companies work to know their customers better, carrying out changes in organizational culture so as to focus on the needs of their public. In this context, many organizations implement CRM (Customer Relationship Management), aiming at greater integration with customers by collecting information about their activities and needs in order to understand their behavior, obtain their satisfaction and, consequently, retain them. This paper seeks to clarify the experience of CRM and of its implementation at the international and national levels. A strong tendency toward CRM implementation was found not only worldwide but also among Brazilian organizations. One of the fundamental requirements for successful implementation is a complete understanding of this working philosophy and its absorption into the organization's culture. Another relevant aspect is the contribution of electronic support (software) to integrating sales, marketing and customer support functions.

  17. Electroforese em papel das proteínas do líqüido cefalorraquidiano: IV. valores normais

    Directory of Open Access Journals (Sweden)

    A. Spina-França

    1960-03-01

    The proteins of cisternal CSF from 30 adults (13 healthy and 17 with neuroses) were analyzed by paper electrophoresis; the mean values found for the protein fractions were: prealbumin 2.2%; albumin 51.6%; globulins: α1 5.0%, α2 8.7%, β (including the τ fraction) 21.6%, and γ 10.9%. Compared with the results found for the protein fractions of blood serum from 30 adults (17 normal and 13 with neuroses, including 7 of those whose CSF was studied), the CSF protein profile proved different: the CSF shows the presence of the prealbumin fraction, a higher proportion of β globulins, and a small amount of γ globulin.

  18. Right and Wrong and Cultural Diversity: Replication of the 2002 NAS/Zogby Poll on Business Ethics

    Science.gov (United States)

    Ludlum, Marty; Mascaloinov, Sergei

    2004-01-01

    In April 2002, a NAS/Zogby poll found that only a quarter of sampled students perceived uniform standards of "right and wrong" and that most students felt that ethical behavior depends on cultural diversity. In this effort to replicate those findings in a larger sample of American college students, the authors obtained results that…

  19. Determinação das espécies de cromo nas cinzas da incineração de couro wet-blue em reatores de leito fixo e leito fluidizado

    OpenAIRE

    Clauren Moura Martins

    2001-01-01

    This work aims at the quantitative determination of the chromium concentration (III, VI, and total) in the ashes of wet-blue leather shavings incinerated in fixed-bed reactors at temperatures of 450, 550, and 650 °C and in a fluidized-bed reactor at 730, 780, 830, and 850 °C, verifying whether these results fall within the limits allowed by Brazilian environmental regulations and, consequently, whether, with respect to chromium concentration, the incineration processes used are...

  20. The TESS Science Processing Operations Center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with R(sub p) less than 4 Earth radii and to measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
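
    At its core, the transit search described above looks for periodic, box-shaped dips in a light curve. The following is a minimal sketch of that idea on synthetic data, using astropy's BoxLeastSquares; it illustrates the technique only, it is not the SPOC's actual Transiting Planet Search pipeline, and all numbers in it are made up.

        # Minimal box least squares (BLS) transit search on a synthetic light
        # curve; an illustration of the technique only, not the SPOC pipeline.
        import numpy as np
        from astropy.timeseries import BoxLeastSquares

        rng = np.random.default_rng(42)
        t = np.arange(0.0, 27.4, 2.0 / 60 / 24)      # ~27-day sector, 2-min cadence
        flux = 1.0 + 1e-4 * rng.standard_normal(t.size)

        # Inject a hypothetical 3-day, 2-hour, 500 ppm box-shaped transit.
        period, duration, depth = 3.0, 2.0 / 24, 5e-4
        flux[(t % period) < duration] -= depth

        bls = BoxLeastSquares(t, flux)
        pg = bls.autopower(duration)                  # periodogram over trial periods
        best = int(np.argmax(pg.power))
        print(f"recovered period: {pg.period[best]:.3f} d, "
              f"depth: {pg.depth[best] * 1e6:.0f} ppm")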

  1. Advanced CFD simulation for the assessment of nuclear safety issues at EDF. Some examples

    International Nuclear Information System (INIS)

    Vare, Christophe

    2014-01-01

    EDF R and D has computing power that places it among the top industrial research centers in the world. Its supercomputers and in-house codes, together with its experts, represent important capabilities in support of EDF's activities (safety analyses, support for the design of new reactors, analysis of accident situations that cannot be reproduced experimentally, better understanding of physics or of complex system response, effects of uncertainties and identification of dominant parameters, qualification and optimization of processes and materials...). Advanced numerical simulation is a powerful tool that allows EDF to increase its competitiveness and to improve the performance and safety of its plants. EDF chose to develop its own in-house codes rather than use commercial software, in order to capitalize on its expertise and methodologies. This choice has also allowed easier technology transfer to the business units and engineering divisions concerned, fast adaptation of the simulation tools to emerging needs, and the development of specific physics or functionality not addressed by commercial offerings. Over the last ten years, EDF has opened its in-house codes as open source. This is the case for Code_Aster (structural analysis), Code_Saturne (computational fluid dynamics, CFD), TELEMAC (flow calculations in aquatic environments), SALOME (a generic pre- and post-processing platform) and SYRTHES (heat transfer in complex geometries), among others. The three open-source codes Code_Aster, Code_Saturne and TELEMAC are certified by the French Nuclear Regulatory Authority for many "important to safety" studies. Advanced simulation, which treats complex, multi-field and multi-physics problems, is of great importance for the assessment of nuclear safety issues. This paper presents two examples of advanced simulation using Code_Saturne for safety issues of nuclear power plants in the fields of R and D and...

  2. Innovation Developments of Coal Chemistry Science in L.M. Litvinenko Institute of Physical-Organic Chemistry and Coal Chemistry of NAS of Ukraine

    Directory of Open Access Journals (Sweden)

    Shendrik, T.G.

    2015-11-01

    The article presents a short historical review of the innovation developments of the Coal Chemistry Department of the L.M. Litvinenko Institute, NAS of Ukraine, connected with coal-mine exploitation problems: the search for ways to prevent spontaneous combustion, dust control in mines, and the establishment of the structural-chemical features of coals of different genesis and stages of metamorphism, with the aim of developing new methods for their modification and rational use. Methods for obtaining inexpensive sorbents from Ukrainian raw materials (including carbon-containing waste) are proposed. The problems facing modern coal chemistry at the IPOCC of the NAS of Ukraine are outlined.

  3. Influência das práticas parentais nas estratégias de coping e de savoring utilizadas pelos adolescentes em contexto escolar

    OpenAIRE

    Tristão, Nádia Andreia Alves Farinha

    2009-01-01

    Master's thesis, Psychology (Educational and Guidance Psychology Section), 2009, Universidade de Lisboa, Faculdade de Psicologia e de Ciências da Educação. This study was carried out to investigate the influence of parental educational practices on the coping strategies used by adolescents in the school context, and to explore the role these practices also play in their savoring strategies. To this end, the perspectives of Maccoby and Martin (1983) on...

  4. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Thomas J.; Tarman, Thomas D.; Martinez, Luis M.; Miller, Marc M.; Adams, Roger L.; Chen, Helen Y.; Brandt, James M.; Wyckoff, Peter S.

    2000-07-24

    This document highlights the DISCOM² Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. The conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore, and Los Alamos National Laboratories have participated in the conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC '99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and with their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  5. „Honda” korporācijas konkurētspēja Japānas un pasaules auto industrijas tirgū

    OpenAIRE

    Rubina, Jeļena

    2010-01-01

    The bachelor's thesis "'Honda' Corporation's Competitiveness in the Japanese and Global Auto Industry Market" is devoted to a study of the Honda corporation based on several factors that have influenced its success in the Japanese and global auto industry. Among them are the corporation's history and experience in the motorcycle industry, which contributed to the formation of its strategy, and the development of technologies, carried out by a dedicated corporate division, and their application in car manufacturing, including hybrid and racing cars...

  6. OS USOS DO FACEBOOK NAS MANIFESTAÇÕES DOS SIMBOLISMOS ORGANIZACIONAIS

    Directory of Open Access Journals (Sweden)

    Camila Uliana Donna

    This article aims to understand the relationship between the uses of Facebook by members of the online newspaper XYZ and the manifestation of organizational symbolism. To contextualize the approach taken, theoretical contributions on symbolic interactionism, interpretivism, and organizational symbolism are articulated. These contributions support the argument that social interaction, communication, and the uses of Facebook are interrelated in everyday organizational life. Out of this everyday relationship, different social groups elaborate symbolic constructions with the potential to mark the organizational context, insofar as the symbolism constructed interferes in the articulations among the social groups themselves within organizations. A qualitative method guided the empirical approach of this study. Data were collected through bibliographic and documentary research, netnography, and semi-structured interviews, and were treated through thematic content analysis. The analysis showed that Facebook is a channel of symbolic exchange among subjects in the organization, but on this medium the exchanges are veiled. A shared understanding emerged that there is a great deal of exposure on Facebook and that people are therefore afraid to post personal or work-related information, believing they are being watched. In this context, other digital social networks were also identified as vehicles for the exchange of symbolic content.

  7. Comparación computacional de estructuras de proteínas. Aplicación al estudio de un inhibidor de carboxipeptidasa como agente antitumoral

    OpenAIRE

    Mas Benavente, José Manuel

    2001-01-01

    Available from TDX. Title taken from the digitized cover. The general objective of this thesis is part of a broader protein-engineering project that seeks to analyze and redesign the structure, folding pathway, natural function, and biotechnological applications of a protein, PCI (Potato Carboxypeptidase Inhibitor). The structural characteristics of this protein, essentially its disulfide bridges, invited us to carry out a general study of proteins...

  8. Nonlinear dynamics of non-equilibrium holes in p-type modulation-doped GaInNAs/GaAs quantum wells

    Directory of Open Access Journals (Sweden)

    Amann Andreas

    2011-01-01

    Nonlinear charge transport parallel to the layers of p-modulation-doped GaInNAs/GaAs quantum wells (QWs) is studied both theoretically and experimentally. Experimental results show that at low temperature, T = 13 K, an applied electric field of about 6 kV/cm heats the high-mobility holes in the GaInNAs QWs and drives their real-space transfer (RST) into the low-mobility GaAs barriers. This results in negative differential mobility and self-generated oscillatory instabilities in the RST regime. We developed an analytical model based upon the coupled nonlinear dynamics of the real-space hole transfer and of the interface potential barrier controlled by the space charge in the doped GaAs layer. Our simulation results predict dc-bias-dependent self-generated current oscillations with frequencies in the high microwave range.
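
    The mechanism described in the abstract, fast carrier transfer coupled to a slowly responding potential barrier, is the classic recipe for a relaxation oscillator. As a schematic stand-in (explicitly not the authors' hole-transfer model), the sketch below integrates a generic two-variable fast-slow system of FitzHugh-Nagumo form, which self-generates sustained oscillations once the bias parameter crosses threshold:

        # Schematic fast-slow relaxation oscillator (FitzHugh-Nagumo form);
        # a generic illustration of bias-driven self-sustained oscillations,
        # not the paper's hole-transfer model.
        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, state, bias, eps=0.08, a=0.7, b=0.8):
            v, w = state   # v: fast variable ("current"), w: slow variable ("barrier")
            dv = v - v**3 / 3.0 - w + bias
            dw = eps * (v + a - b * w)
            return [dv, dw]

        # Below threshold the system settles to a fixed point; above it, it cycles.
        for bias in (0.0, 0.5):
            sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], args=(bias,), max_step=0.1)
            tail = sol.y[0][sol.y[0].size // 2:]      # discard the initial transient
            print(f"bias={bias}: peak-to-peak swing = {np.ptp(tail):.2f}")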

  9. Representações sociais e território nas letras de funk proibido de facção

    Directory of Open Access Journals (Sweden)

    Andréa Rodriguez

    2011-12-01

    Through an analysis of the lyrics of "proibido de facção" funk (faction-banned funk), this article offers a reading of the territoriality of the drug trade in the favelas. This musical style, which expresses a culture specific to poor urban youth, reveals scenes of a daily life little known to most of the population. To this end, a content analysis of 50 funk lyrics was carried out in order to understand the issues this type of narrative puts into circulation. Territory and territoriality stand out as categories of analysis that reveal the dynamics the drug trade imposes on the city's working-class spaces, dynamics that bear directly on the social representations and practices of favela residents.

  10. Influência regional no consumo precoce de alimentos diferentes do leite materno em menores de seis meses residentes nas capitais brasileiras e Distrito Federal

    OpenAIRE

    Saldiva, Silvia Regina Dias Medici; Venancio, Sonia Isoyama; Gouveia, Ana Gabriela Cepeda; Castro, Ana Lucia da Silva; Escuder, Maria Mercedes Loureiro; Giugliani, Elsa Regina Justo

    2011-01-01

    The objective was to evaluate regional influence on the early consumption of foods other than breast milk among infants under six months of age living in the Brazilian state capitals. Data on 18,929 children from the II Pesquisa de Prevalência de Aleitamento Materno nas Capitais Brasileiras (2008) were analyzed. The frequencies of consumption of tea, juice, formula, and porridge were calculated for the capitals of the five Brazilian regions. Consumption curves were obtained by logit analysis and estimates...

  11. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for

  12. Grau de Concorrência e Poder de Mercado nas Exportações de Leite em Pó para o Brasil

    Directory of Open Access Journals (Sweden)

    Lucas Campio Pinha

    Abstract: There is evidence that Argentina and Uruguay act as oligopolists and exercise market power in powdered milk exports to Brazil. Firms from these two countries export almost all of these products to the Brazilian market, while international trade in these products is regionalized, restricting competition from other countries. The central objective of this paper is to assess the degree of competition in exports of whole and skimmed powdered milk to Brazil by testing for market power exercised by the exporting countries. To this end, a residual demand model is used, estimated by two-stage least squares (2SLS), seemingly unrelated regressions (SUR), and three-stage least squares (3SLS). The results indicate that Uruguay acts as an oligopolist and exercises market power in both markets, while Argentina does so only in the case of whole powdered milk. The results suggest that Brazil should seek ways to increase competition in powdered milk imports, since prices would then tend to be lower and consumers would benefit.
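
    The residual demand approach lends itself to an instrumental-variables sketch: regress the (inverse) demand price on quantity, instrumenting quantity with an exporter cost shifter, and read market power off the estimated flexibility. The code below is a minimal illustration on simulated data with hypothetical column names; it uses linearmodels' IV2SLS and is not the authors' exact specification.

        # Minimal residual demand sketch estimated by 2SLS on simulated data;
        # column names and data are hypothetical, not the paper's dataset.
        import numpy as np
        import pandas as pd
        from linearmodels.iv import IV2SLS

        rng = np.random.default_rng(0)
        n = 200
        cost = rng.normal(size=n)         # exporter cost shifter (instrument)
        demand = rng.normal(size=n)       # exogenous demand shifter (control)
        log_p = 1.0 + 0.8 * cost + rng.normal(scale=0.2, size=n)
        log_q = 2.0 - 1.5 * log_p + 0.5 * demand + rng.normal(scale=0.3, size=n)
        df = pd.DataFrame({"log_p": log_p, "log_q": log_q,
                           "cost": cost, "demand": demand})

        # Inverse residual demand: price on quantity, instrumenting quantity
        # with the cost shifter; an estimated flexibility significantly below
        # zero is the usual market-power indication in this framework.
        res = IV2SLS.from_formula("log_p ~ 1 + demand + [log_q ~ cost]", df).fit()
        print(res.summary)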

  13. Production technology of an electrolyte for Na/S batteries

    Science.gov (United States)

    Heimke, G.; Mayer, H.; Reckziegel, A.

    1982-05-01

    The trend toward developing a cheap electrochemical storage battery and the development of the Na/S system are discussed. The key element in this type of battery is the beta-Al2O3 solid electrolyte. The characteristics of first importance for this material are: specific surface area, density of the green and of the sintered material, absence of cracks, gas permeability, flexural strength, purity, electrical conductivity, and crystal structure and dimensions. The influence of the production method on all these characteristics was investigated, e.g., the method of compacting the powder, tunnel-kiln sintering versus static chamber-furnace sintering, sintering inside a container or not, and the type of kiln material when sintering in a container. In the stationary chamber furnace, beta-alumina ceramics were produced with a density of 3.2 g/cm3, a mechanical strength above 160 MPa, and an electrical conductivity of about 0.125 ohm^-1 cm^-1 at 300 °C. The best kiln material proved to be MgO and MgAl2O4·MgO ceramics.
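
    For orientation, the quoted conductivity converts directly into an electrolyte resistance once a geometry is assumed; a back-of-envelope sketch follows (the tube dimensions are hypothetical, since the record gives none):

        # Back-of-envelope resistance of a beta-alumina electrolyte tube from
        # the quoted conductivity (0.125 ohm^-1 cm^-1 at 300 C). The geometry
        # is a hypothetical example; the record does not specify one.
        sigma = 0.125          # electrical conductivity, ohm^-1 cm^-1
        wall_thickness = 0.15  # cm (assumed)
        area = 100.0           # cm^2, effective conduction area (assumed)

        resistance = wall_thickness / (sigma * area)   # R = L / (sigma * A)
        print(f"R = {resistance * 1e3:.0f} milliohm")   # -> R = 12 milliohm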

  14. Verification of self-report of zygosity determined via DNA testing in a subset of the NAS-NRC twin registry 40 years later.

    Science.gov (United States)

    Reed, Terry; Plassman, Brenda L; Tanner, Caroline M; Dick, Danielle M; Rinehart, Shannon A; Nichols, William C

    2005-08-01

    The National Academy of Sciences -- National Research Council (NAS-NRC) twin panel, created nearly 50 years ago, had twin zygosity determined primarily via a similarity questionnaire that has been estimated to correctly classify at least 95% of twins. In the course of a study on the genetics of healthy ageing in the NAS-NRC twins, DNA was collected for genome-wide scanning and zygosity confirmation was examined in 343 participating pairs. The sample was supplemented from two other studies using NAS-NRC twins where one or both co-twins were suspected to have Alzheimer disease or another dementia, or Parkinson's disease. Overall 578 twin pairs with DNA were analyzed. Zygosity assignment for 96.8% (519/536) was confirmed via questionnaire. Among 42 pairs whose questionnaire responses were inconclusive for assigning zygosity, 50% were found to be monozygous (MZ) and 50% were dizygous (DZ). There was some evidence for greater misclassification of presumed DZ pairs in the healthy ageing study where participation favored pairs who were similar in having a favorable health history and willingness to volunteer without any element of perceived risk for a specific disease influencing participation.

  15. Advances in toxicology and medical treatment of chemical warfare nerve agents

    Science.gov (United States)

    2012-01-01

    Organophosphorus (OP) nerve agents (NAs) are the deadliest chemical warfare agents. They are divided into two classes, G and V agents, and most are liquid at room temperature. The chemical structures and mechanisms of action of NAs are similar to those of OP pesticides, but their toxicities are higher. The main mechanism of action is irreversible inhibition of acetylcholinesterase (AChE), resulting in accumulation of toxic levels of acetylcholine (ACh) at the synaptic junctions and thus stimulation of muscarinic and nicotinic receptors; other mechanisms have recently been described as well. Central nervous system (CNS) depression, particularly of the respiratory and vasomotor centers, may induce respiratory failure and cardiac arrest. Intermediate syndrome after NA exposure is less common than in OP pesticide poisoning. There are four approaches to detecting exposure to NAs in biological samples: (I) AChE activity measurement, (II) determination of hydrolysis products in plasma and urine, (III) fluoride reactivation of phosphylated binding sites, and (IV) mass spectrometric determination of cholinesterase adducts. The clinical manifestations are similar to those of OP pesticide poisoning, but with greater severity and more fatalities. Management should start as soon as possible: victims should immediately be removed from the field, and treatment is begun with auto-injector antidotes (atropine and oximes) such as the MARK I kit. A 0.5% hypochlorite solution, as well as newer products such as the M291 resin kit, G117H, and phosphotriesterase isolated from soil bacteria, are now available for decontamination of NAs. Atropine and oximes are the well-known antidotes and should be given as clinically indicated. However, some new adjuvant and additional treatments, such as magnesium sulfate, sodium bicarbonate, gacyclidine, benactyzine, tezampanel, hemoperfusion, antioxidants, and bioscavengers, have recently been used for OP NA poisoning. PMID:23351280

  16. Níveis de isoleucina digestível sobre o desempenho de fêmeas suínas dos 15 aos 30 kg

    OpenAIRE

    Leandro Dalcin Castilha

    2011-01-01

    To determine the digestible isoleucine requirement of gilts from 15 to 30 kg, two experiments were carried out: a performance trial and a nitrogen-balance trial. In the first experiment, 40 crossbred gilts of high genetic potential and average performance, with an initial live weight of 15.00 ± 0.52 kg, were distributed in a randomized block design consisting of five treatments (0.45, 0.52, 0.59, 0.66, and 0.73% digestible isoleu...

  17. Expresión diferencial de proteínas cardiacas en ratas diabéticas tipo Sprague-Dawley / Differential heart protein expression in diabetic type Sprague-Dawley rats

    Directory of Open Access Journals (Sweden)

    Richard Southgate

    Cardiac proteins were purified from diabetic and healthy Sprague-Dawley rats. The proteins were fractionated by two-dimensional gel electrophoresis (2D-PAGE), and the resulting separation was visualized by Coomassie blue staining. After conversion to a digital image, the proteins of the diabetic and control groups were compared and matched to determine levels of differential expression. Sixty of the one hundred and eighty proteins on the gel were excised and digested into small peptide fragments, which were analyzed by mass spectrometry to determine the primary structure (amino acid sequence) of the resulting peptides. This information was queried against a database (http://www.ncbi.nlm.nih.gov/) to determine the identity of the proteins from which the peptides derived. The identity of the differentially expressed proteins in the cardiac tissue of both groups was established; several proteins were found to be expressed at abnormal levels in the hearts of diabetic rats, including tyrosine phosphatase (PTP, Q60998), very-low-density lipoprotein receptor (VLDL-R, P98156), glutathione peroxidase (PHGPx, O70325), serine hydroxymethyltransferase (SHMT, P50431), adenylyl cyclase-associated protein 1 (CAP1, P40124), and telethonin (TELT, O70548).

  18. QNAP 1263U Network Attached Storage (NAS)/ Storage Area Network (SAN) Device Users Guide

    Science.gov (United States)

    2016-11-01

    [Extraction residue from the report PDF: distribution boilerplate and table-of-contents fragments. The recoverable content indicates the guide covers the device's NAS/SAN file-sharing protocols (Server Message Block and the newer Internet Small Computer Systems Interface, iSCSI), mapping the network drive under Windows 7 and Windows 10, connecting to iSCSI on the NAS, and adding a new IQN to the iSCSI ACL.]
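
    Mapping an SMB share from a NAS of this kind to a Windows drive letter typically reduces to a single "net use" command; a minimal sketch follows, with a hypothetical UNC path and drive letter (the guide's actual values are not recoverable from this record):

        # Minimal sketch: map a NAS SMB share to a Windows drive letter by
        # shelling out to the built-in "net use" command. The UNC path and
        # drive letter are hypothetical placeholders.
        import subprocess

        share = r"\\qnap-nas\share"   # hypothetical UNC path to the NAS share
        drive = "Z:"                  # hypothetical drive letter

        result = subprocess.run(
            ["net", "use", drive, share, "/persistent:yes"],
            capture_output=True, text=True,
        )
        print(result.stdout or result.stderr)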

  19. Paradigma preventivo e lógica identitária nas abordagens sobre o Hip Hop

    Directory of Open Access Journals (Sweden)

    Rodrigo Lages e Silva

    2008-06-01

    This article reviews academic research on hip-hop, pointing to the preponderance of the concept of identity in these theorizations. It seeks to contextualize the rise of the concept of identity in views of hip-hop, up to its conjunction with the concept of citizenship. This identity-based logic is closely tied to the construction of the notion of deviant categories. Forged against the backdrop of urbanization, the identity-based logic is the academic expression of a rationality we call the preventive paradigm, whose function is to anticipate the potential violence that young people living on the urban periphery would supposedly represent. It is thus a matter of understanding the fabrication of the suburb and of youth as a social problem, analyzing the conceptions that sustain the corrective and moralizing ideals that approaches to hip-hop express through an emphasis on its identity benefits.

  20. Trends in computerized structural analysis and synthesis; Proceedings of the Symposium, Washington, D.C., October 30-November 1, 1978

    Science.gov (United States)

    Noor, A. K. (Editor); McComb, H. G., Jr.

    1978-01-01

    The subjects considered are related to future directions of structural applications and potential of new computing systems, advances and trends in data management and engineering software development, advances in applied mathematics and symbolic computing, computer-aided instruction and interactive computer graphics, nonlinear analysis, dynamic analysis and transient response, structural synthesis, structural analysis and design systems, advanced structural applications, supercomputers, numerical analysis, and trends in software systems. Attention is given to the reliability and optimality of the finite element method, computerized symbolic manipulation in structural mechanics, a standard computer graphics subroutine package, and a drag method as a finite element mesh generation scheme.