WorldWideScience

Sample records for advanced supercomputing nas

  1. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  2. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environments, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupercomputers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. To achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load-balancing methods, the mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, the authors assess the potential of optical and neural technologies for developing future supercomputers.

  3. Advanced Architectures for Astrophysical Supercomputing

    Science.gov (United States)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). Hardware originally designed to speed up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  4. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  5. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  6. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
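
    As background to the hybrid scheme described above, the following minimal C sketch (not taken from the paper; the workload and the final reduction are illustrative assumptions) shows the typical structure of an MPI/OpenMP hybrid code: OpenMP threads share the work within a node while MPI handles communication between nodes.

      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      #define N 1000000

      int main(int argc, char **argv)
      {
          int provided, rank, size;
          /* Request threaded MPI: OpenMP threads compute, one thread communicates. */
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          static double local[N];
          double sum = 0.0;

          /* OpenMP shares the node-local work among the cores of one node. */
          #pragma omp parallel for reduction(+:sum)
          for (int i = 0; i < N; i++) {
              local[i] = (double)(rank + i) * 0.5;   /* illustrative workload */
              sum += local[i];
          }

          /* MPI handles communication between nodes (here, a global reduction). */
          double global = 0.0;
          MPI_Reduce(&sum, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("global sum = %e (%d ranks, %d threads/rank)\n",
                     global, size, omp_get_max_threads());

          MPI_Finalize();
          return 0;
      }

    A typical launch for such a code places one MPI rank per node and one OpenMP thread per core, mirroring the node/core split the abstract describes.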

  7. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  8. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource that is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  9. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecasting, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very-large-scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described.

  10. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
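
    Since the abstract centers on sparse iterative solution schemes for finite-difference and finite-element systems, a small serial illustration may help. The C sketch below (not the authors' software; the grid size, right-hand side, and tolerance are arbitrary assumptions) applies Jacobi iteration to the tridiagonal system arising from a 1-D finite-difference Laplacian, the kind of kernel that domain-decomposition and element-by-element strategies distribute across processors.

      #include <stdio.h>
      #include <stdlib.h>
      #include <math.h>

      #define N 256          /* interior grid points (assumed problem size) */
      #define MAX_IT 10000
      #define TOL 1e-8

      int main(void)
      {
          double h = 1.0 / (N + 1);
          double *u = calloc(N + 2, sizeof *u);      /* current iterate, with boundary values */
          double *unew = calloc(N + 2, sizeof *unew);
          double *f = malloc((N + 2) * sizeof *f);

          for (int i = 0; i <= N + 1; i++)
              f[i] = 1.0;                            /* assumed right-hand side */

          /* Jacobi sweeps for the tridiagonal system arising from -u'' = f. */
          for (int it = 0; it < MAX_IT; it++) {
              double diff = 0.0;
              for (int i = 1; i <= N; i++) {
                  unew[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i]);
                  diff = fmax(diff, fabs(unew[i] - u[i]));
              }
              for (int i = 1; i <= N; i++)
                  u[i] = unew[i];
              if (diff < TOL) {
                  printf("stopped after %d sweeps\n", it + 1);
                  break;
              }
          }
          printf("u(0.5) ~= %f\n", u[(N + 1) / 2]);
          free(u); free(unew); free(f);
          return 0;
      }

    In a distributed setting of the kind the abstract targets, each processor would own a contiguous block of the grid and exchange only the boundary values with its neighbors each sweep.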

  11. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  12. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  13. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term ''supercomputing'' have spread over the past ten years. The performances of the main computers installed so far in the Japan Atomic Energy Research Institute are compared. There are two methods to increase computing speed using existing circuit elements: parallel processor systems and vector processor systems. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents are divided into increasing the computing speed of existing simulation calculations and accelerating new technical developments in atomic energy. Examples of supercomputing in the Japan Atomic Energy Research Institute are reported. (K.I.)

  14. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  15. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  16. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems ETA 10 is a Class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  17. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  18. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  19. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOP's) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  20. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers in the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research which use supercomputers frequently and the contents of their computations are outlined. What vectorizing is, is explained simply, and nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the inherent parallelism of atomic energy computations such as those for fluids, the algorithms for vector treatment, and the speedup gained by vectorizing are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation changed from criticality computations around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors and reactor safety assessment at present. The method of using computers also advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, from experimental analysis to numerical simulation, and so on. (K.I.)

  1. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur...

  2. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  3. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  4. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  5. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  6. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  7. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  8. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  9. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods, reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  10. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics [77], [79], [80], and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of the Forty-Fourth Meeting of the Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of the 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of the 2010 Conference on Applied Research in Information Technology, sponsored by

  11. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained.

  12. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  13. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  14. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, which is ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  15. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  16. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  17. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
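
    The patent abstract above describes nodes interconnected by a five-dimensional torus network. As a generic illustration of how application code addresses neighbors on such a topology (this is not code from the patent; the 2x2x2x2x2 layout and the use of MPI Cartesian communicators are assumptions for the sketch), the following C program builds a periodic 5-D grid and queries the neighbor ranks along each dimension. It expects at least 32 MPI ranks.

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int dims[5] = {2, 2, 2, 2, 2};      /* assumed 32-node layout; run with >= 32 ranks */
          int periods[5] = {1, 1, 1, 1, 1};   /* periodic in every dimension: a torus */
          MPI_Comm torus;
          MPI_Cart_create(MPI_COMM_WORLD, 5, dims, periods, 1, &torus);

          if (torus != MPI_COMM_NULL) {
              int rank, coords[5];
              MPI_Comm_rank(torus, &rank);
              MPI_Cart_coords(torus, rank, 5, coords);

              /* Neighbor ranks one hop away in each of the five torus dimensions. */
              for (int d = 0; d < 5; d++) {
                  int minus, plus;
                  MPI_Cart_shift(torus, d, 1, &minus, &plus);
                  if (rank == 0)
                      printf("dim %d: -1 -> rank %d, +1 -> rank %d\n", d, minus, plus);
              }
              MPI_Comm_free(&torus);
          }
          MPI_Finalize();
          return 0;
      }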

  18. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  19. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies intending either to compete directly in the supercomputer arena or to provide entry-level systems from which to graduate to supercomputers are springing up everywhere. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that, from both the hardware and software points of view, they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  20. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
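
    For a rough sense of the scale of the computation described above, the following back-of-the-envelope estimate (assuming roughly 200,000 Kepler target stars, a figure not stated in this abstract) uses only the numbers quoted:

      \[
      \frac{2000~\text{injections/star}}{16~\text{injections/(core hour)}} \approx 125~\text{core-hours per star},
      \]
      \[
      0.16 \times 200{,}000~\text{stars} \times 125~\text{core-hours/star} \approx 4\times 10^{6}~\text{core-hours},
      \]
      \[
      \frac{4\times 10^{6}~\text{core-hours}}{200~\text{hours}} \approx 2\times 10^{4}~\text{cores in use concurrently},
      \]

    which would correspond to a modest fraction of a machine on the scale of Pleiades.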

  1. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  2. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  3. Comparison of 250 MHz R10K Origin 2000 and 400 MHz Origin 2000 Using NAS Parallel Benchmarks

    Science.gov (United States)

    Turney, Raymond D.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    This report describes results of benchmark tests on Steger, a 250 MHz Origin 2000 system with R10K processors, currently installed at the NASA Ames National Advanced Supercomputing (NAS) facility. For comparison purposes, the tests were also run on Lomax, a 400 MHz Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to measure system performance. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  4. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  5. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (''teraflops'' or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  6. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
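
    The main-wrap technique described above turns an ordinary C/C++ program's main into a plain function that a generated wrapper can call repeatedly inside one job. A minimal hedged sketch of that refactoring is given below; the function name app_main and the trivial workload are illustrative assumptions, not the paper's actual interface.

      #include <stdio.h>

      /* Formerly the program's main(); now callable repeatedly from a wrapper
       * without re-launching a process for every task. */
      int app_main(int argc, char **argv)
      {
          if (argc < 2) {
              fprintf(stderr, "usage: %s <input>\n", argv[0]);
              return 1;
          }
          printf("processing %s\n", argv[1]);   /* illustrative work */
          return 0;
      }

      /* Thin stand-in for the wrapper: in the scheme the abstract describes, a
       * Swift-generated wrapper builds argv for each task and calls app_main()
       * in-process instead of spawning a new executable. */
      int main(int argc, char **argv)
      {
          return app_main(argc, argv);
      }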

  7. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. ''Power users'' were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  8. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  9. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  10. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem.

  11. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  12. Numerical aerodynamic simulation (NAS)

    International Nuclear Information System (INIS)

    Peterson, V.L.; Ballhaus, W.F. Jr.; Bailey, F.R.

    1984-01-01

    The Numerical Aerodynamic Simulation (NAS) Program is designed to provide a leading-edge computational capability to the aerospace community. It was recognized early in the program that, in addition to more advanced computers, the entire computational process ranging from problem formulation to publication of results needed to be improved to realize the full impact of computational aerodynamics. Therefore, the NAS Program has been structured to focus on the development of a complete system that can be upgraded periodically with minimum impact on the user and on the inventory of applications software. The implementation phase of the program is now under way. It is based upon nearly 8 yr of study and should culminate in an initial operational capability before 1986. The objective of this paper is fivefold: 1) to discuss the factors motivating the NAS program, 2) to provide a history of the activity, 3) to describe each of the elements of the processing-system network, 4) to outline the proposed allocation of time to users of the facility, and 5) to describe some of the candidate problems being considered for the first benchmark codes

  13. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  14. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  15. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  16. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; & BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  17. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main focus of this workshop was the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among numerical experimentalists working on supercomputing techniques. The presented papers covered a range of stimulating subjects: hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity. In fact, these numerical calculations have become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  18. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, a text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  19. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on their vector-computation capability. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of atomic energy codes to the vector-computation capability of supercomputers, the problems regarding their utilization, and the future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithms and program structures used in the codes. The speed-up achieved by pipelined vector processing, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speed-up factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
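
    The adaptability discussed above comes down to how well a code's inner loops map onto pipelined vector hardware. As a rough, hedged illustration of the principle (in Python/NumPy rather than the Fortran of the original codes, and not taken from the paper), compare a scalar element-by-element loop with its vectorized equivalent:

      # Illustrative only: contrast a scalar loop with a vectorized whole-array
      # operation (the original codes were Fortran on CRAY-1-class hardware).
      import time
      import numpy as np

      n = 1_000_000
      a, b = np.random.rand(n), np.random.rand(n)

      t0 = time.perf_counter()              # scalar-style loop, one element per step
      c_loop = np.empty(n)
      for i in range(n):
          c_loop[i] = 2.0 * a[i] + b[i]
      t_loop = time.perf_counter() - t0

      t0 = time.perf_counter()              # vectorized form, analogous to a vector instruction
      c_vec = 2.0 * a + b
      t_vec = time.perf_counter() - t0

      print(f"loop {t_loop:.3f}s  vectorized {t_vec:.3f}s  speed-up x{t_loop / t_vec:.1f}")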

  20. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  1. ASCI's Vision for supercomputing future

    International Nuclear Information System (INIS)

    Nowak, N.D.

    2003-01-01

    The full text of publication follows. Advanced Simulation and Computing (ASC, formerly Accelerated Strategic Computing Initiative [ASCI]) was established in 1995 to help Defense Programs shift from test-based confidence to simulation-based confidence. Specifically, ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality - far exceeding what might have been achieved in the absence of a focused initiative. To realize its vision, ASC is creating simulation and prototyping capabilities, based on advanced weapon codes and high-performance computing

  2. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  3. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms

  4. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor designs are often intuition-driven processes where designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, the complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so that the task gets increasingly challenging. In the context of renewed reactor designs, first-realization projects are often run in parallel with advanced design studies, although they remain very dependent on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features with the accuracy of the on-going design methods. This should be possible within reasonable simulation time and without requiring advanced computer skills at the project management scale. Also, these tools should easily accommodate modeling progress in each discipline throughout the project's lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks...) and optimization techniques (genetic algorithms). Also, database management and visualization are made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermal-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics will be presented. Also, the flexibility of the URANIE tool will be illustrated with the presentation of several approaches to improve Pareto front quality. (author)

  5. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  6. FPS scientific computers and supercomputers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  7. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.)

  8. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  9. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  10. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  11. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  12. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
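
    STREAM measures sustainable memory bandwidth with simple vector kernels such as the "triad" a = b + s*c. The following is only a rough sketch of the idea in Python/NumPy; the official benchmark is written in C with OpenMP and is far more careful about timing, array sizing and memory traffic:

      # Rough illustration of the STREAM "triad" kernel a = b + s*c.
      # The official benchmark is C/OpenMP; array sizes here are modest.
      import time
      import numpy as np

      n = 20_000_000                      # large enough to defeat on-chip caches
      b, c = np.random.rand(n), np.random.rand(n)
      s = 3.0

      t0 = time.perf_counter()
      a = b + s * c
      dt = time.perf_counter() - t0

      bytes_moved = 3 * n * 8             # read b, read c, write a (8 bytes each)
      print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.2f} GB/s")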

  13. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that, contrary to expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  14. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel message passing.

  15. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  16. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days. Columbia is a 20-node supercomputer built on proven 512-processor nodes and is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors. It provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  17. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA - the Production and Distributed Analysis Workload Management System - has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances of Next Generation Genome Sequencing (NGS) technology led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when running on a powerful computing resource. In this paper we will describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split input files into chunks which are processed separately on different nodes as separate inputs for PALEOMIX, and finally merge the output files; this is very similar to what is done by ATLAS to process and simulate data. We dramatically decreased the total walltime through job (re)submission automation and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
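
    The split/process/merge pattern described above can be sketched as follows; this is an illustration only, not the actual PanDA or PALEOMIX code, and the chunk size and file names are hypothetical:

      # Sketch of the split/process/merge pattern; chunk size and file names
      # are hypothetical, and the per-chunk processing step is left abstract.
      from pathlib import Path

      def split_into_chunks(path, lines_per_chunk=4_000_000):
          """Split a large text input into fixed-size chunks for independent jobs."""
          chunks, buf, idx = [], [], 0
          with open(path) as fh:
              for line in fh:
                  buf.append(line)
                  if len(buf) >= lines_per_chunk:
                      chunks.append(write_chunk(buf, idx))
                      buf, idx = [], idx + 1
          if buf:
              chunks.append(write_chunk(buf, idx))
          return chunks

      def write_chunk(lines, idx):
          out = Path(f"chunk_{idx:04d}.txt")
          out.write_text("".join(lines))
          return out

      def merge_outputs(outputs, merged="merged_result.txt"):
          """Concatenate per-chunk results back into a single output file."""
          with open(merged, "w") as dst:
              for part in outputs:
                  dst.write(Path(part).read_text())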

  18. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  19. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  20. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow one to solve JINR data analysis problems for large spectrometers (in particular for the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS. The processors are combined by means of VME standard buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II computer's peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port and all JINR users get access to the suggested system.

  1. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  2. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using this metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
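
    The "sequential part of the program from the point of view of Amdahl's law" refers to the textbook speed-up bound: if a fraction p of the work is parallelizable and runs on N processors, then

      S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

    so even a small sequential fraction (1 - p) caps the achievable speed-up regardless of processor count.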

  3. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers-namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  4. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  5. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  6. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  7. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.
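
    The general pattern of a light-weight MPI wrapper that fans independent single-threaded payloads out across worker-node cores can be sketched as below; this is not the actual PanDA pilot code, and the payload command and file names are hypothetical:

      # Sketch of an MPI wrapper running one single-threaded payload per rank.
      # Not the actual PanDA pilot code; "payload.sh" and its arguments are hypothetical.
      import subprocess
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each rank processes its own input chunk independently (embarrassingly parallel).
      result = subprocess.run(
          ["./payload.sh", f"--input=chunk_{rank:04d}.dat", f"--output=out_{rank:04d}.dat"],
          capture_output=True, text=True,
      )

      # Gather return codes on rank 0 so the wrapper can report overall success.
      codes = comm.gather(result.returncode, root=0)
      if rank == 0:
          print("all payloads succeeded" if all(c == 0 for c in codes) else f"failures: {codes}")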

  8. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for job...

  9. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for jo...

  10. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  11. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massive parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered over the area of the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  12. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)

  13. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
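
    The linearized step described above (a tomographic matrix relating model adjustments to travel-time residuals, regularized and solved) can be illustrated by a damped, Tikhonov-regularized least-squares update. This is a generic sketch; the matrix G, the residual vector and the damping value are placeholders, not taken from the paper:

      # Sketch of one damped (Tikhonov-regularized) linearized tomography update:
      #   minimize ||G dm - r||^2 + lam^2 ||dm||^2
      # G (tomographic matrix), r (travel-time residuals) and lam are placeholders.
      import numpy as np

      def damped_update(G, r, lam=0.1):
          n = G.shape[1]
          A = np.vstack([G, lam * np.eye(n)])          # augment the system with damping rows
          b = np.concatenate([r, np.zeros(n)])
          dm, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares model adjustment
          return dm

      # Toy example: 200 rays crossing 50 model cells.
      G = np.random.rand(200, 50)
      r = np.random.rand(200)
      model_update = damped_update(G, r)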

  14. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  15. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software for the automatic investigation of computational mathematics tasks with approximate data of different structures was designed. Applied software for mathematical modeling problems in construction, welding and filtration processes was implemented.

  16. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer

  17. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
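
    A hedged illustration of the underlying idea, grouping messages by syntactic template after masking variable fields, is sketched below; the paper's online clustering algorithm is considerably more sophisticated than this:

      # Sketch of grouping log messages by syntactic template: mask obvious variable
      # fields (numbers, hex IDs) and bucket identical templates.
      import re
      from collections import defaultdict

      def template(message):
          msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)   # hex addresses / IDs
          return re.sub(r"\d+", "<NUM>", msg)                 # counters, node numbers

      logs = [
          "node 142 link error count 7",
          "node 89 link error count 12",
          "kernel panic at 0xdeadbeef",
      ]

      groups = defaultdict(list)
      for line in logs:
          groups[template(line)].append(line)

      for tmpl, members in groups.items():
          print(f"{len(members):3d}  {tmpl}")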

  18. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
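
    A minimal sketch of the two stages described, a one-level Haar wavelet decomposition followed by vector quantization of the detail coefficients with a generic k-means codebook, is given below; it stands in for the application-specific quantizer design of the report:

      # Sketch of wavelet-transform + vector-quantization compression: a one-level
      # Haar decomposition, then a k-means codebook over small blocks of detail
      # coefficients. Stands in for the application-specific design in the report.
      import numpy as np

      def haar_1d(x):
          approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
          detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
          return approx, detail

      def kmeans_codebook(vectors, k=16, iters=20):
          codebook = vectors[np.random.choice(len(vectors), k, replace=False)]
          for _ in range(iters):
              dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
              labels = dists.argmin(axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      codebook[j] = vectors[labels == j].mean(axis=0)
          return codebook

      signal = np.random.rand(1024)            # stand-in for one field of model output
      approx, detail = haar_1d(signal)
      blocks = detail.reshape(-1, 4)           # 4-dimensional vectors to quantize
      codebook = kmeans_codebook(blocks)
      labels = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2).argmin(axis=1)
      # The small codebook plus the per-block indices replace the detail coefficients.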

  19. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with Navier-Stokes computations that became widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations, CFD, aeroelastic, and controls coupling for flutter suppression and active control, and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment that encompasses many technologies. 40 refs

  20. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  1. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  2. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  3. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.
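
    For readers who want a feel for what the IMB point-to-point tests measure, the sketch below implements a simple ping-pong round-trip timing between two MPI ranks using mpi4py; the message size, repetition count and reported bandwidth formula are illustrative choices, not the IMB implementation itself.

        # Minimal ping-pong timing in the spirit of the IMB point-to-point tests.
        # Run with, e.g., "mpiexec -n 2 python pingpong.py".
        from mpi4py import MPI
        import numpy as np
        import time

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        nbytes = 1 << 20                      # 1 MiB message, illustrative size
        buf = np.zeros(nbytes, dtype='b')
        reps = 100

        comm.Barrier()
        t0 = time.perf_counter()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1, tag=0)
                comm.Recv(buf, source=1, tag=1)
            elif rank == 1:
                comm.Recv(buf, source=0, tag=0)
                comm.Send(buf, dest=0, tag=1)
        t1 = time.perf_counter()

        if rank == 0:
            rtt = (t1 - t0) / reps            # average round-trip time
            print(f"avg round trip: {rtt*1e6:.1f} us, "
                  f"bandwidth: {2*nbytes/rtt/1e6:.1f} MB/s")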

  4. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860 microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a REAL*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.

  5. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnect is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed memory parallel computer systems. Needless to say, software highly tuned for new architectures such as many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer' system. We have developed two versions, the standard version (eigen s) and an enhanced-performance version (eigen sx), both developed on the T2K cluster system housed at the University of Tokyo. Eigen s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and a flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen s performs excellently on the T2K system with 4096 cores (theoretical peak of 37.6 TFLOPS), achieving 3.0 TFLOPS on a matrix of dimension two hundred thousand. The enhanced version, eigen sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY representation. Even though this version is still at a test stage, it reaches 4.7 TFLOPS on a matrix of the same dimension, compared with 3.0 TFLOPS for eigen s. (author)
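
    The tridiagonalize / solve / back-transform pipeline that eigen s follows can be illustrated on a single node with SciPy, as in the sketch below; this is only a small-matrix stand-in for the distributed-memory routines described in the record, and the test matrix and its size are assumed.

        # Dense symmetric eigensolver pipeline on a small matrix (illustrative only).
        import numpy as np
        from scipy.linalg import hessenberg, eigh_tridiagonal

        rng = np.random.default_rng(0)
        n = 500
        A = rng.standard_normal((n, n))
        A = (A + A.T) / 2.0                    # symmetric test matrix (assumed)

        # 1) Householder reduction: for symmetric A the Hessenberg form is tridiagonal.
        T, Q = hessenberg(A, calc_q=True)
        d, e = np.diag(T), np.diag(T, k=1)

        # 2) Solve the tridiagonal eigenproblem (the paper uses divide and conquer here).
        w, W = eigh_tridiagonal(d, e)

        # 3) Back-transform eigenvectors to the original basis.
        V = Q @ W

        # Check against a one-shot dense solver.
        w_ref = np.linalg.eigvalsh(A)
        print("max eigenvalue error:", float(np.max(np.abs(w - w_ref))))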

  6. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain can vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers, and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
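
    The sketch below shows, on a single node and in NumPy, the kind of iterative, residual-based finite difference update on a regular Cartesian grid that the record describes; the actual solvers distribute such stencils over GPUs with point-to-point MPI halo exchanges, and the grid size, damping factor and pseudo-time step used here are illustrative assumptions.

        # Pseudo-transient, damped-residual relaxation of the 2D Laplace equation
        # on a regular Cartesian grid (illustrative single-node sketch).
        import numpy as np

        nx, ny = 128, 128
        h = 1.0 / (nx - 1)
        u = np.zeros((nx, ny))
        u[0, :] = 1.0                         # assumed Dirichlet boundary condition
        dtau = 0.2 * h**2                     # pseudo-time step (stability-limited)
        damp = 0.9                            # damping of the residual "velocity"
        dudtau = np.zeros((nx - 2, ny - 2))

        for it in range(20000):
            # interior residual of the 5-point Laplace stencil
            res = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                   - 4.0 * u[1:-1, 1:-1]) / h**2
            # damped update acts as a cheap convergence accelerator
            dudtau = damp * dudtau + res
            u[1:-1, 1:-1] += dtau * dudtau
            if it % 500 == 0 and np.max(np.abs(res)) * h**2 < 1e-8:
                break

        print("iterations:", it, "scaled residual:", float(np.max(np.abs(res)) * h**2))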

  7. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small-scale system given the improved performance of modern PC CPUs. However, if a system is large or involves long time scales, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, now has calculation capability superior to a PC's CPU; its performance is a match for a supercomputer of the year 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU, owing to the difficult programming techniques required to map a calculation matrix onto a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.
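
    The appeal of the GPU for Monte Carlo work comes from the independence of the samples, which lets one sample (or batch of samples) be assigned to each GPU thread. The sketch below shows that embarrassingly parallel structure with a vectorized NumPy estimate of pi on the CPU; it is an illustration of the pattern only, not the CUDA benchmark code of the paper.

        # Monte Carlo estimate of pi by rejection sampling. Each sample is
        # independent, which is exactly the structure that maps to GPU threads;
        # here the batch is simply vectorized with NumPy on the CPU.
        import numpy as np

        def mc_pi(n_samples, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.random(n_samples)
            y = rng.random(n_samples)
            inside = np.count_nonzero(x * x + y * y <= 1.0)
            return 4.0 * inside / n_samples

        print(mc_pi(10_000_000))   # ~3.1416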

  8. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small-scale system given the improved performance of modern PC CPUs. However, if a system is large or involves long time scales, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, now has calculation capability superior to a PC's CPU; its performance is a match for a supercomputer of the year 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU, owing to the difficult programming techniques required to map a calculation matrix onto a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation

  9. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and efficient directive and declarative software for the modelling of complex systems

  10. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations; these data were used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  11. KfK seminar series on supercomputing and visualization from May to September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period from May 1992 to September 1992 a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de

  12. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which makes it possible to identify typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs, i.e. jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow, are detected. For each approach, the results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.

  13. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.

  14. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within the framework of the complex networks approach. A three-layer system of nested models of complex networks is proposed, comprising an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of the complex network simulation is proposed. The performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling the data in social networks, and results visualization. The applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks, and epidemic spreading.

  15. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on certain problem types are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are further explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  16. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  17. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with increasing numbers of OpenMP threads per node, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions Per Cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node, no matter how many nodes (32, 128, or 512) are used. © 2013 IEEE.

  18. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with increasing numbers of OpenMP threads per node, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions Per Cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node, no matter how many nodes (32, 128, or 512) are used. © 2013 IEEE.

  19. nas

    Directory of Open Access Journals (Sweden)

    Modesto Varas

    Introduction and objective: pancreatic endocrine tumors (PET) are difficult to diagnose. Their accurate localization using imaging techniques is intended to provide a definite cure. The goal of this retrospective study was to review a PET series from a private institution. Patients and methods: the medical records of 19 patients with PETs were reviewed, including 4 cases of MEN-1, for a period of 17 years (1994-2010). A database was set up with ten parameters: age, sex, symptoms, imaging techniques, size and location in the pancreas, metastasis, surgery, complications, adjuvant therapies, definite diagnosis, and survival or death. Results: a total of 19 cases were analyzed. Mean age at presentation was 51 years (range: 26-67 y); there were 14 males and 5 females, and tumor size was 5 to 80 mm (mean: 20 mm). Metastatic disease was present in 37% (7/19). Most underwent the following imaging techniques: ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI). Fine needle aspiration (FNA) was performed for the primary tumor in 4 cases. Non-functioning: 7 cases (37%); insulinoma: 2 cases (1 with possible multiple endocrine neoplasia, MEN); Zollinger-Ellison syndrome (ZES) from gastrinoma: 5 (3 with MEN-1); glucagonoma: 2 cases; somatostatinoma: 2 cases; carcinoid: 1 case with carcinoid-like syndrome. Most patients were operated upon: 14/19 (73%). Four (4/14: 28%) had postoperative complications following pancreatectomy: pancreatitis, pseudocyst, and abdominal collections. Some patients received chemotherapy (4), somatostatin (3) and interferon (2) before or after surgery. Median follow-up was 48 months. Actuarial survival during the study was 73.6% (14/19). Conclusions: age was similar to that described in the literature. Males were predominant. Most cases were non-functioning (37%). Most patients underwent surgery (73%) with little morbidity (28%) and an actuarial survival of 73.6% at the time of the study.

  20. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running of standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
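
    The "lightweight MPI wrapper" pattern mentioned above can be sketched in a few lines: each MPI rank launches one independent single-node payload, so a single batch job fans out over many worker nodes. The sketch below uses mpi4py and subprocess with a placeholder payload command; it is an illustration of the pattern, not the actual PanDA pilot code.

        # Each MPI rank runs one independent payload; the payload command and its
        # per-rank argument are placeholder assumptions for illustration.
        from mpi4py import MPI
        import subprocess

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        payload = ["echo", f"simulating event batch {rank}"]   # placeholder workload
        result = subprocess.run(payload, capture_output=True, text=True)
        print(f"rank {rank}: exit={result.returncode} out={result.stdout.strip()}")

        comm.Barrier()   # all payloads finish before the wrapper job exits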

  1. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  2. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementing a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
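
    The computational core of such noise-source imaging is the cross-correlation of noise records between station pairs. The sketch below correlates two synthetic traces in the frequency domain and recovers the inter-station lag; the sampling rate, lag and noise level are assumed for illustration.

        # Frequency-domain cross-correlation of two synthetic noise traces.
        # Sampling rate, lag and noise amplitude are illustrative assumptions.
        import numpy as np

        fs = 20.0                                    # sampling rate (Hz), assumed
        t = np.arange(0, 3600.0, 1.0 / fs)           # one hour of "noise"
        rng = np.random.default_rng(1)
        source = rng.standard_normal(t.size)

        lag_s = 4.0                                  # assumed inter-station travel time (s)
        trace_a = source + 0.1 * rng.standard_normal(t.size)
        trace_b = np.roll(source, int(lag_s * fs)) + 0.1 * rng.standard_normal(t.size)

        # circular cross-correlation via the correlation theorem
        n = trace_a.size
        spec = np.conj(np.fft.rfft(trace_a)) * np.fft.rfft(trace_b)
        xcorr = np.fft.irfft(spec, n)
        lags = np.fft.fftfreq(n) * n / fs            # lag axis in seconds

        print(f"recovered lag: {lags[np.argmax(xcorr)]:.2f} s")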

  3. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to run artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  4. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States)]; Staten, D.G. [Wartburg Central High School, TN (United States)]

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  5. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  6. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-based production-rule systems for analysis are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP supercomputer. The fault-tree and event-tree analysis methodologies from systems analysis are used for problem representation and are coupled to the rule-based system paradigm from knowledge engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-rule analysis system, HAL-1986, that uses both backward chaining and forward chaining. The inference engine uses an induction-deduction-oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies are demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations, in nuclear reactor safety analysis. The use of the presented methodologies for the prognostication of future device responses under operational and accident conditions, using coupled symbolic and procedural programming, is discussed
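
    As a toy illustration of forward chaining in a production-rule system of this kind, the sketch below repeatedly fires rules whose antecedents are satisfied until no new facts are derived; the facts and rules are invented for illustration, and the actual HAL-1986 engine is written in Portable Standard Lisp and also supports backward chaining.

        # Toy forward-chaining rule engine: fire rules until no new fact is derived.
        # Facts and rules are illustrative, not taken from the HAL-1986 knowledge base.
        facts = {"pump_A_tripped"}

        # each rule: (set of antecedent facts, consequent fact)
        rules = [
            ({"pump_A_tripped"}, "coolant_flow_low"),
            ({"coolant_flow_low"}, "core_heat_removal_degraded"),
            ({"core_heat_removal_degraded"}, "start_auxiliary_feedwater"),
        ]

        changed = True
        while changed:
            changed = False
            for antecedents, consequent in rules:
                if antecedents <= facts and consequent not in facts:
                    facts.add(consequent)
                    changed = True

        print(sorted(facts))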

  7. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to reduce the computation time for simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing the simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  8. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool: a visual analytics system that uses data from the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We couple interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  9. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q

  10. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  11. UAS-NAS Stakeholder Feedback Report

    Science.gov (United States)

    Randall, Debra; Murphy, Jim; Grindle, Laurie

    2016-01-01

    The need to fly UAS in the NAS to perform missions of vital importance to national security and defense, emergency management, science, and to enable commercial applications has been continually increasing over the past few years. To address this need, the NASA Aeronautics Research Mission Directorate (ARMD) Integrated Aviation Systems Program (IASP) formulated and funded the Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project (hereafter referred to as UAS-NAS Project) from 2011 to 2016. The UAS-NAS Project identified the following need statement: The UAS community needs routine access to the global airspace for all classes of UAS. The Project identified the following goal: To provide research findings to reduce technical barriers associated with integrating UAS into the NAS utilizing integrated system level tests in a relevant environment. This report provides a summary of the collaborations between the UAS-NAS Project and its primary stakeholders and how the Project applied and incorporated the feedback.

  12. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  13. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington]; Jha, S [Rutgers University]; Klimentov, A [Brookhaven National Laboratory (BNL)]; Maeno, T [Brookhaven National Laboratory (BNL)]; Nilsson, P [Brookhaven National Laboratory (BNL)]; Oleynik, D [University of Texas at Arlington]; Panitkin, S [Brookhaven National Laboratory (BNL)]; Wells, Jack C [ORNL]; Wenaus, T [Brookhaven National Laboratory (BNL)]

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Distributed Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  14. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  15. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu; Duan, Benchun; Taylor, Valerie

    2011-01-01

    , such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular

  16. NAS Decadal Review Town Hall

    Science.gov (United States)

    The National Academies of Sciences, Engineering and Medicine is seeking community input for a study on the future of materials research (MR). Frontiers of Materials Research: A Decadal Survey will look at defining the frontiers of materials research ranging from traditional materials science and engineering to condensed matter physics. Please join members of the study committee for a town hall to discuss future directions for materials research in the United States in the context of worldwide efforts. In particular, input on the following topics will be of great value: progress, achievements, and principal changes in the R&D landscape over the past decade; identification of key MR areas that have major scientific gaps or offer promising investment opportunities from 2020-2030; and the challenges that MR may face over the next decade and how those challenges might be addressed. This study was requested by the Department of Energy and the National Science Foundation. The National Academies will issue a report in 2018 that will offer guidance to federal agencies that support materials research, science policymakers, and researchers in materials research and other adjoining fields. Learn more about the study at http://nas.edu/materials.

  17. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real time or faster on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  18. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  19. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
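
    The flavor of such a prediction can be conveyed with a schematic model in which per-core memory traffic is divided by the sustained node bandwidth shared among the cores, and communication is charged with a simple latency/bandwidth term, as in the sketch below; the formulas and numbers are illustrative assumptions, not the calibrated model of the paper.

        # Schematic performance prediction: compute, contended memory bandwidth,
        # and a latency/bandwidth communication term. All values are illustrative.
        def predicted_time(work_flops, bytes_moved, msg_bytes, n_msgs,
                           flops_per_s=8e9, node_bw=25.6e9, cores_per_node=4,
                           latency=2e-6, link_bw=1.5e9):
            compute = work_flops / flops_per_s
            # all cores on a node contend for the same sustained memory bandwidth
            memory = bytes_moved / (node_bw / cores_per_node)
            comm = n_msgs * (latency + msg_bytes / link_bw)
            return max(compute, memory) + comm     # assume compute/memory overlap

        # per-core workload of a hypothetical weak-scaling stencil step
        print(f"{predicted_time(2e9, 1.6e9, 64e3, 26):.4f} s per step")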

  20. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  1. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community

  2. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  3. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given
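
    The training kernel that such a library parallelizes is a single backpropagation step; the NumPy sketch below shows one such step for a small two-layer perceptron with a squared-error loss. Layer sizes, learning rate and data are illustrative, and the Quadrics assembler-level optimizations are not represented.

        # One backpropagation step for a two-layer perceptron (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out, lr = 4, 8, 3, 0.1
        W1, b1 = rng.standard_normal((n_in, n_hid)) * 0.1, np.zeros(n_hid)
        W2, b2 = rng.standard_normal((n_hid, n_out)) * 0.1, np.zeros(n_out)

        x = rng.standard_normal((32, n_in))      # one mini-batch of inputs (assumed)
        t = rng.random((32, n_out))              # target outputs in [0, 1)

        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        # forward pass
        h = sigmoid(x @ W1 + b1)
        y = sigmoid(h @ W2 + b2)

        # backward pass (squared-error loss, sigmoid derivatives)
        dy = (y - t) * y * (1.0 - y)
        dh = (dy @ W2.T) * h * (1.0 - h)

        # gradient-descent update
        W2 -= lr * h.T @ dy;  b2 -= lr * dy.sum(axis=0)
        W1 -= lr * x.T @ dh;  b1 -= lr * dh.sum(axis=0)

        print("batch MSE:", float(np.mean((y - t) ** 2)))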

  4. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan)]; Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)]

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reducing the visualization processing time and using the JAEA network efficiently have become necessary. As a solution, we introduced a remote visualization system which is able to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring only data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of the visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)

  5. Advances in Supercomputing for the Modeling of Atomic Processes in Plasmas

    International Nuclear Information System (INIS)

    Ludlow, J. A.; Ballance, C. P.; Loch, S. D.; Lee, T. G.; Pindzola, M. S.; Griffin, D. C.; McLaughlin, B. M.; Colgan, J.

    2009-01-01

    An overview will be given of recent atomic and molecular collision methods developed to take advantage of modern massively parallel computers. The focus will be on direct solutions of the time-dependent Schroedinger equation for simple systems using large numerical lattices, as found in the time-dependent close-coupling method, and for configuration interaction solutions of the time-independent Schroedinger equation for more complex systems using large numbers of basis functions, as found in the R-matrix with pseudo-states method. Results from these large scale calculations are extremely useful in benchmarking less accurate theoretical methods and experimental data. To take full advantage of future petascale and exascale computing resources, it appears that even finer grain parallelism will be needed.

  6. Simultaneous Estimation of Hydrochlorothiazide, Hydralazine Hydrochloride, and Reserpine Using PCA, NAS, and NAS-PCA.

    Science.gov (United States)

    Sharma, Chetan; Badyal, Pragya Nand; Rawal, Ravindra K

    2015-01-01

    In this study, new and feasible UV-visible spectrophotometric and multivariate spectrophotometric methods were described for the simultaneous determination of hydrochlorothiazide (HCTZ), hydralazine hydrochloride (H.HCl), and reserpine (RES) in combined pharmaceutical tablets. Methanol was used as the solvent for analysis and the whole UV region was scanned from 200-400 nm. Resolution was obtained by applying multivariate methods such as the net analyte signal method (NAS), principal component analysis (PCA), and net analyte signal-principal component analysis (NAS-PCA) to the UV spectra of the mixture. The results obtained from the three methods were compared. NAS-PCA resolved substantially more of the overlapping spectral information than NAS or PCA alone. Thus, the NAS-PCA technique, a combination of the NAS and PCA methods, is advantageous for extracting information from overlapping spectra.
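
    As a rough illustration of the net analyte signal idea described above, the sketch below projects a mixture spectrum onto the subspace orthogonal to the spectra of the interfering components. The spectra here are synthetic Gaussian bands, purely for demonstration; they are not the HCTZ/H.HCl/RES data of the study.

        import numpy as np

        # Net analyte signal (NAS) sketch: the part of a mixture spectrum orthogonal to the interferents.
        wavelengths = np.linspace(200, 400, 201)

        def band(center, width):
            return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

        analyte = band(270, 15)                                              # synthetic analyte spectrum
        interferents = np.column_stack([band(230, 20), band(320, 25)])       # synthetic interferent spectra

        mixture = 0.8 * analyte + 0.5 * interferents[:, 0] + 0.3 * interferents[:, 1]

        # Projector onto the orthogonal complement of the interferent space.
        P = np.eye(len(wavelengths)) - interferents @ np.linalg.pinv(interferents)
        nas_mixture = P @ mixture
        nas_analyte = P @ analyte

        # Scalar NAS-based estimate of the analyte contribution in the mixture.
        estimate = nas_mixture @ nas_analyte / (nas_analyte @ nas_analyte)
        print(f"estimated analyte level: {estimate:.3f} (true value 0.8)")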

  7. Benchmarking and tuning the MILC code on clusters and supercomputers

    International Nuclear Information System (INIS)

    Gottlieb, Steven

    2002-01-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha

  8. Benchmarking and tuning the MILC code on clusters and supercomputers

    International Nuclear Information System (INIS)

    Steven A. Gottlieb

    2001-01-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha

  9. Benchmarking and tuning the MILC code on clusters and supercomputers

    Science.gov (United States)

    Gottlieb, Steven

    2002-03-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.
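
    The KS conjugate gradient singled out in these benchmarks is, at its core, the standard CG iteration; the sketch below shows a textbook CG solver for a generic sparse positive-definite system (not MILC's staggered Dirac operator), simply to make explicit which kernel the memory-system tuning targets: repeated matrix-vector products and vector updates that stream through memory.

        import numpy as np

        def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=1000):
            """Textbook CG for A x = b with A symmetric positive definite.
            apply_A returns A @ x; this is where most of the memory traffic occurs."""
            x = np.zeros_like(b)
            r = b - apply_A(x)
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        if __name__ == "__main__":
            n = 200
            A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
            x = conjugate_gradient(lambda v: A @ v, np.ones(n))
            print("residual:", np.linalg.norm(A @ x - np.ones(n)))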

  10. Advanced computers and simulation

    International Nuclear Information System (INIS)

    Ryne, R.D.

    1993-01-01

    Accelerator physicists today have access to computers that are far more powerful than those available just 10 years ago. In the early 1980's, desktop workstations performed less than one million floating point operations per second (Mflops), and the realized performance of vector supercomputers was at best a few hundred Mflops. Today vector processing is available on the desktop, providing researchers with performance approaching 100 Mflops at a price that is measured in thousands of dollars. Furthermore, advances in Massively Parallel Processors (MPP) have made performance of over 10 gigaflops a reality, and around mid-decade MPPs are expected to be capable of teraflops performance. Along with advances in MPP hardware, researchers have also made significant progress in developing algorithms and software for MPPs. These changes have had, and will continue to have, a significant impact on the work of computational accelerator physicists. Now, instead of running particle simulations with just a few thousand particles, we can perform desktop simulations with tens of thousands of simulation particles, and calculations with well over 1 million particles are being performed on MPPs. In the area of computational electromagnetics, simulations that used to be performed only on vector supercomputers now run in several hours on desktop workstations, and researchers are hoping to perform simulations with over one billion mesh points on future MPPs. In this paper we will discuss the latest advances, and what can be expected in the near future, in hardware, software and applications codes for advanced simulation of particle accelerators.

  11. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  12. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  13. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  14. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR Infiniband network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  15. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
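
    The abstract does not spell out the load balancing strategy itself; the sketch below shows one simple, commonly used heuristic consistent with the description: assign documents, largest first, to the currently least-loaded worker, with document sizes standing in for estimated processing cost. It is a generic illustration, not necessarily the strategy implemented in paraBTM.

        import heapq

        def greedy_balance(doc_sizes, n_workers):
            """Assign documents to workers so total assigned size stays roughly even.
            Largest-first greedy assignment to the least-loaded worker (generic heuristic)."""
            heap = [(0, w) for w in range(n_workers)]        # (current load, worker id)
            heapq.heapify(heap)
            assignment = {w: [] for w in range(n_workers)}
            for doc, size in sorted(enumerate(doc_sizes), key=lambda kv: kv[1], reverse=True):
                load, w = heapq.heappop(heap)
                assignment[w].append(doc)
                heapq.heappush(heap, (load + size, w))
            return assignment

        if __name__ == "__main__":
            sizes = [120, 5, 300, 42, 42, 18, 250, 7, 90, 60]
            plan = greedy_balance(sizes, n_workers=3)
            for w, docs in plan.items():
                print(f"worker {w}: docs {docs}, total size {sum(sizes[d] for d in docs)}")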

  16. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented, followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented; they reveal the details of the losses for a single operation. We then analyze the global performance of a whole supercomputer by identifying reduction factors that bring the theoretical peak performance down to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures, in a snapshot of January 1991, is briefly mentioned. Finally, some remarks are made on a user-friendly architecture for a supercomputer. (orig.)
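
    The vector triad commonly used for such micro-measurements is the kernel a[i] = b[i] + c[i] * d[i]: three loads and one store per element for only two floating-point operations, which makes it a direct probe of sustained memory bandwidth. The sketch below times a NumPy stand-in for the compiled loops used on the vector machines above; the traffic estimate is approximate and the numbers depend entirely on the machine it runs on.

        import time
        import numpy as np

        # Vector triad a = b + c * d: bandwidth-bound, ~4 streams of 8-byte doubles, 2 flops per element.
        n = 10_000_000
        b = np.random.rand(n)
        c = np.random.rand(n)
        d = np.random.rand(n)
        a = np.empty_like(b)

        best = float("inf")
        for _ in range(5):                        # repeat and keep the best time
            t0 = time.perf_counter()
            np.multiply(c, d, out=a)
            np.add(a, b, out=a)
            best = min(best, time.perf_counter() - t0)

        bytes_moved = 4 * n * 8                   # approx: load b, c, d and store a
        print(f"triad time: {best:.4f} s, ~{bytes_moved / best / 1e9:.1f} GB/s sustained")
        print(f"performance: {2 * n / best / 1e6:.0f} Mflops")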

  17. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than

  18. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less

  19. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created a basis for the development of a new research area, the Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing the functioning of complex socio-economic systems. We firmly believe that the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the simulation of multi-agent social systems and, no less importantly, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all regarding the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the growth in computing power, it has become possible to describe the behavior of many separate fragments of a complex system, such as socio-economic systems. The article also discusses the experience of foreign scientists and practitioners in running AFMs on supercomputers, analyzes an example of an AFM developed at CEMI RAS, and outlines the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation to forecast the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  20. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Response curves were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
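
    As a toy illustration of the finite difference approach described in this record, the sketch below advances the 1D heat (diffusion) equation with an explicit scheme. The real computation is multi-dimensional, around a toroidal electrode with Joule heating, so this shows only the basic numerical ingredient, with made-up material parameters.

        import numpy as np

        # Explicit finite-difference step for dT/dt = alpha * d2T/dx2 (1D toy version).
        alpha = 1.0e-6               # assumed thermal diffusivity of soil, m^2/s (illustrative)
        dx = 0.05                    # grid spacing, m
        dt = 0.4 * dx * dx / alpha   # time step within the explicit stability limit

        nx = 200
        T = np.full(nx, 15.0)        # initial soil temperature, deg C
        T[95:105] = 60.0             # hot region standing in for the electrode neighbourhood

        for step in range(5000):
            lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / (dx * dx)
            T[1:-1] += alpha * dt * lap        # interior update
            T[0] = T[-1] = 15.0                # fixed far-field temperature

        print(f"peak soil temperature after {5000 * dt / 3600:.1f} h: {T.max():.2f} deg C")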

  1. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  2. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
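
    The sorted k-mer lists mentioned above are a standard building block for anchoring alignments; the sketch below extracts and sorts k-mers from a single sequence in plain Python, purely to illustrate the data structure. The BG/P implementation described in the record is distributed across compute nodes and far more memory-conscious.

        def sorted_kmer_list(sequence, k):
            """Return a sorted list of (k-mer, position) pairs for one sequence.
            A serial stand-in for the distributed data structure described in the record."""
            seq = sequence.upper()
            kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
            return sorted(kmers)

        if __name__ == "__main__":
            genome_fragment = "ACGTACGTTGACCGTA"   # toy input, not a real genome
            for kmer, pos in sorted_kmer_list(genome_fragment, k=4):
                print(kmer, pos)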

  3. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  4. Lie. kū́nas

    Directory of Open Access Journals (Sweden)

    Simas Karaliūnas

    2011-12-01

    Full Text Available LITH. kū́nas “BODY”. Summary. The cognates of Lith. kū́nas “body” and Latv. kûnis (kũne, kũņa “body; chrysalis; caterpillar of a butterfly; bee pupae” are supposed to be Lith. kūna “carrion”, pa-kū́nė “sore, furuncle; upper lamella, a layer under the roots”, Latv. kuna “wart, excrescence”, kunis “bottom of a sheaf” and others. Lith. kū́nas, kūna may represent substantivized forms of the adjective Latv. kûns “round, obese, stout”, while Latv. kûnis, kũņa, kūne seem to be derivatives of the suffixes *-o-, *-ā-, *-ē-.

  5. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.

  6. ATLAS FTK a - very complex - custom parallel supercomputer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever increasing pile-up LHC environment, advanced techniques of analysing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100kHz). In order to achieve this performance a highly parallel system was designed and now it is under installation in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector coverage. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative memory - AM06). In a first stage coarse resolution hits are matched against the patterns and the accepted h...

  7. Proteínas: redefiniendo algunos conceptos

    Directory of Open Access Journals (Sweden)

    Juan Camilo Calderon Vélez

    2006-04-01

    Full Text Available Knowledge about the primary, secondary, and tertiary structures of proteins grows every day; the terminology and its proper use can be confusing, even for experts. This communication proposes a simple and practical way to approach the subject.

  8. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  9. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs

  10. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  11. A SMART NAS Toolkit for Optimality Metrics Overlay, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation proposed is a plug-and-play module for NASA's proposed SMART NAS (Shadow Mode Assessment using Realistic Technologies for the NAS) system that...

  12. Notas sobre o fantasma nas toxicomanias

    Directory of Open Access Journals (Sweden)

    Walter Firmo de Oliveira Cruz

    Full Text Available This article was presented at the Clinical Conference of the Associação Psicanalítica de Porto Alegre, "The direction of the cure in drug addictions: the subject in question," in October 2003. Through the discussion of a clinical case, it seeks to highlight the importance of the relationship between the subject's fantasy and the choice of object in drug addictions. It also addresses drug addiction as a symptom of contemporaneity, as well as traits of the aesthetics that compose it.

  13. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  14. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved by the usage of the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  15. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  16. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  17. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  18. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations powerful computers are obligatory, effectively splitting the thesis in two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  19. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  20. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
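
    A minimal sketch of the idea behind topology-aware task mapping: given an application communication matrix and pairwise hop distances between the allocated node slots, place heavily communicating ranks close together. The greedy heuristic below is a generic illustration, not the spectral bisection or neighbor-join methods developed in the paper, and the random data are stand-ins for measured mpiP profiles.

        import numpy as np

        def greedy_task_mapping(comm, dist):
            """comm[i][j]: bytes exchanged between ranks i and j.
            dist[a][b]: hop distance between allocated node slots a and b.
            Returns a rank -> slot placement that tries to keep chatty ranks on nearby slots."""
            n = comm.shape[0]
            placement = {}
            free_slots = set(range(n))
            order = np.argsort(-comm.sum(axis=1))          # place the most communicating ranks first
            placement[order[0]] = free_slots.pop()
            for rank in order[1:]:
                best_slot, best_cost = None, None
                for slot in free_slots:
                    cost = sum(comm[rank, r] * dist[slot, s] for r, s in placement.items())
                    if best_cost is None or cost < best_cost:
                        best_slot, best_cost = slot, cost
                placement[rank] = best_slot
                free_slots.remove(best_slot)
            return placement

        def total_cost(comm, dist, placement):
            return sum(comm[i, j] * dist[placement[i], placement[j]]
                       for i in placement for j in placement if i < j)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            comm = rng.integers(0, 100, size=(8, 8)); comm = (comm + comm.T) // 2
            coords = rng.integers(0, 4, size=(8, 3))                        # fake 3D node coordinates
            dist = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=2)
            mapping = greedy_task_mapping(comm, dist)
            print("default cost:", total_cost(comm, dist, {i: i for i in range(8)}))
            print("greedy  cost:", total_cost(comm, dist, mapping))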

  1. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanisms for preventing catastrophic market action are “circuit breakers.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
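
    A rough sketch of the VPIN indicator mentioned above: trades are grouped into equal-volume buckets, the volume in each bucket is classified as buyer- or seller-initiated, and VPIN is the average absolute order-flow imbalance over a rolling window of buckets. The version below uses a simple tick rule on synthetic trades; production implementations typically use bulk volume classification and real tick data, so this is only an illustration of the structure.

        import numpy as np

        def vpin(prices, volumes, bucket_volume, window=50):
            """Toy VPIN: tick-rule classification, equal-volume buckets, rolling mean imbalance."""
            buy = sell = filled = 0.0
            imbalances = []
            last_price = prices[0]
            for p, v in zip(prices[1:], volumes[1:]):
                side_buy = p >= last_price           # tick rule: up-tick (or equal) counts as a buy
                last_price = p
                remaining = v
                while remaining > 0:
                    take = min(remaining, bucket_volume - filled)
                    if side_buy:
                        buy += take
                    else:
                        sell += take
                    filled += take
                    remaining -= take
                    if filled >= bucket_volume:      # bucket complete: record its imbalance
                        imbalances.append(abs(buy - sell) / bucket_volume)
                        buy = sell = filled = 0.0
            imbalances = np.array(imbalances)
            return None if len(imbalances) < window else imbalances[-window:].mean()

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            prices = 100 + np.cumsum(rng.normal(0, 0.05, 20000))      # synthetic trade prices
            volumes = rng.integers(1, 500, 20000).astype(float)       # synthetic trade sizes
            print("toy VPIN:", vpin(prices, volumes, bucket_volume=10000.0))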

  2. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  3. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputer in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  4. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offer unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
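
    The clustering core of the MSTC approach described above is k-means; a serial NumPy sketch is shown below to make the per-iteration structure explicit (distance computation, assignment, centroid update). The parallel MPI/CUDA/OpenACC implementation distributes exactly these steps across nodes and GPUs; the data here are random stand-ins for the standardized ecological variables.

        import numpy as np

        def kmeans(X, k, n_iter=50, seed=0):
            """Plain k-means on observations X (rows = grid cells / time steps, cols = variables)."""
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(n_iter):
                # Assignment step: nearest centroid for every observation (the GPU-friendly part).
                d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
                labels = d2.argmin(axis=1)
                # Update step: mean of the members of each cluster.
                for c in range(k):
                    members = X[labels == c]
                    if len(members):
                        centroids[c] = members.mean(axis=0)
            return labels, centroids

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            X = np.vstack([rng.normal(loc, 0.3, size=(500, 4)) for loc in (0.0, 2.0, 4.0)])
            labels, centroids = kmeans(X, k=3)
            print("cluster sizes:", np.bincount(labels))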

  5. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are at the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  6. Lipoproteínas: metabolismo y lipoproteínas aterogénicas

    OpenAIRE

    Carlos Carvajal

    2014-01-01

    Lipids travel in the blood in different particles containing lipids and proteins, called lipoproteins. There are four classes of lipoproteins in blood: chylomicrons, VLDL, LDL, and HDL. Chylomicrons transport triglycerides (TAG) to vital tissues (heart, skeletal muscle, and adipose tissue). The liver secretes VLDL, which redistributes TAG to adipose tissue, heart, and skeletal muscle. LDL transports cholesterol to cells, and HDL removes cholesterol from cells back to the...

  7. A pedagogia nas malhas de discursos legais

    OpenAIRE

    Jociane Rosa de Macedo Costa

    2002-01-01

    This dissertation deals with discourses of Brazilian educational legislation and related documents from a particular formation (in which significant changes took place in society and culture): pedagogy. Its objective is to show how these discourses, in prescribing the education of the pedagogue, produce a pedagogy that constitutes itself as a practice of government. It is a specific pedagogy, fabricated in the meshes of legal discourses and placed at the service of the nation for the production of...

  8. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  9. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages; 16-node, 1/4GF machine completed in April 1985, 64-node, 1GF machine completed in August 1987, and 256-node, 16GF machine now under construction. The machines all share a common architecture; a two dimensional torus formed from a rectangular array of N 1 x N 2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  10. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, is reported. The model was implemented on a QUADRICS-Q1 supercomputer. A description of the MRBT model is given first. It is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model on it is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
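
    The MRBT formulation itself is not reproduced in the abstract; as an illustration of the kind of Gaussian-like analytical solution described, the sketch below evaluates the standard instantaneous-puff expression for a point release advected by a uniform wind. All parameter names and values are assumed for the example and are not taken from the paper.

        /* Generic instantaneous Gaussian puff for a point release of mass Q,
         * advected by wind speed u along x (illustration only; this is not the
         * MRBT formulation). sx, sy, sz are the dispersion widths. */
        #include <math.h>

        double puff_concentration(double Q, double u,
                                  double sx, double sy, double sz,
                                  double x, double y, double z, double t)
        {
            const double two_pi = 6.283185307179586;
            const double norm = pow(two_pi, 1.5) * sx * sy * sz;
            double ex = (x - u * t) * (x - u * t) / (2.0 * sx * sx);
            double ey = y * y / (2.0 * sy * sy);
            double ez = z * z / (2.0 * sz * sz);
            return (Q / norm) * exp(-(ex + ey + ez));
        }

    Because each space-time point is evaluated independently from a closed-form expression, a model of this kind maps naturally onto a parallel machine such as the QUADRICS-Q1.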

  11. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at the Research Center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As proof tests, the tools have been validated on specific tasks by comparing experimental and numerical results, which are in reasonable agreement. (author)

  12. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  13. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
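
    For readers unfamiliar with the solver being tuned, a generic (unpreconditioned) conjugate-gradient skeleton is sketched below. It is not the MILC, QPhiX or QUDA code, but it shows where the dominant compute kernel (the operator application) and the per-iteration dot products sit, which are the pieces such ports optimize.

        /* Generic conjugate-gradient skeleton (not the MILC/QUDA code): solves
         * A x = b for a symmetric positive-definite operator supplied as a callback.
         * The two dot products per iteration become global reductions across the
         * machine in a parallel implementation. */
        #include <math.h>
        #include <stdlib.h>

        typedef void (*apply_op)(const double *in, double *out, int n, void *ctx);

        static double dot(const double *a, const double *b, int n)
        {
            double s = 0.0;
            for (int i = 0; i < n; ++i) s += a[i] * b[i];
            return s;
        }

        int cg_solve(apply_op A, void *ctx, const double *b, double *x,
                     int n, int max_iter, double tol)
        {
            double *r = malloc(n * sizeof *r), *p = malloc(n * sizeof *p),
                   *Ap = malloc(n * sizeof *Ap);
            for (int i = 0; i < n; ++i) { x[i] = 0.0; r[i] = b[i]; p[i] = b[i]; }
            double rr = dot(r, r, n);

            int it = 0;
            while (it < max_iter && sqrt(rr) > tol) {
                A(p, Ap, n, ctx);                      /* dominant compute kernel */
                double alpha = rr / dot(p, Ap, n);     /* global reduction #1 */
                for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
                double rr_new = dot(r, r, n);          /* global reduction #2 */
                for (int i = 0; i < n; ++i) p[i] = r[i] + (rr_new / rr) * p[i];
                rr = rr_new;
                ++it;
            }
            free(r); free(p); free(Ap);
            return it;
        }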

  14. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  15. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak-performance. For sparse matrices, storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to a sparse matrix technique when the matrices are not large, because the high computational speed compensates fully for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...
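
    The trade-off described above can be made concrete with two small kernels: a dense row-major matrix-vector product, which streams memory with unit stride, and a compressed sparse row (CSR) product, which saves arithmetic and storage but gathers x through an index array. These are generic illustrations, not code from the cited paper.

        /* Dense vs. CSR matrix-vector products (illustration of the trade-off
         * discussed above). The CSR kernel does less arithmetic but accesses x[]
         * through an index array, which is what degrades speed on vector and
         * supercomputer hardware. */

        /* y = A*x, A stored as a dense n-by-n row-major array */
        void dense_matvec(int n, const double *A, const double *x, double *y)
        {
            for (int i = 0; i < n; ++i) {
                double s = 0.0;
                for (int j = 0; j < n; ++j) s += A[i * n + j] * x[j];  /* unit stride */
                y[i] = s;
            }
        }

        /* y = A*x, A stored in compressed sparse row (CSR) format */
        void csr_matvec(int n, const int *row_ptr, const int *col_idx,
                        const double *val, const double *x, double *y)
        {
            for (int i = 0; i < n; ++i) {
                double s = 0.0;
                for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                    s += val[k] * x[col_idx[k]];        /* indirect (gathered) access */
                y[i] = s;
            }
        }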

  16. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, making the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan, and a complex microstrip antenna array with nearly 2000 elements, are performed very efficiently using a maximum of 10240 CPU cores.
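
    The MPI virtual-topology machinery referred to above is sketched below for a 3-D domain decomposition: ranks are factored into a Cartesian grid and each rank obtains its neighbours for halo exchange. This is a generic example of the MPI calls involved, not the optimized topology model of the paper.

        /* Minimal MPI Cartesian (virtual topology) setup for a 3-D domain
         * decomposition, of the kind an FDTD halo exchange is built on. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int nprocs, dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
            MPI_Dims_create(nprocs, 3, dims);           /* factor ranks into a 3-D grid */

            MPI_Comm cart;
            MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cart);

            int rank, coords[3], lo[3], hi[3];
            MPI_Comm_rank(cart, &rank);
            MPI_Cart_coords(cart, rank, 3, coords);
            for (int d = 0; d < 3; ++d)                 /* neighbours for halo exchange */
                MPI_Cart_shift(cart, d, 1, &lo[d], &hi[d]);

            printf("rank %d at (%d,%d,%d) of %dx%dx%d, x-neighbours %d,%d\n", rank,
                   coords[0], coords[1], coords[2], dims[0], dims[1], dims[2],
                   lo[0], hi[0]);

            MPI_Comm_free(&cart);
            MPI_Finalize();
            return 0;
        }

    Passing 1 for the reorder argument of MPI_Cart_create allows the MPI implementation to remap ranks onto the physical network, which is the kind of mapping question the paper's communication model addresses.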

  17. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  18. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  19. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  20. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
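
    The memory-bandwidth-limited character of the PIC inner loop mentioned above can be seen even in a toy example: the 1-D electrostatic push below does only a handful of flops per particle but performs an indexed gather from the field array for every particle. It is a didactic sketch, not VPIC code; all names are illustrative.

        /* Toy 1-D particle push with linear (cloud-in-cell) field gather.
         * Illustrates why PIC inner loops are memory-bound: few flops per
         * particle, plus an indexed (effectively random) read of the field. */

        typedef struct { double x, v; } particle;

        void push(particle *p, long np, const double *E, int nx,
                  double dx, double dt, double qm /* charge-to-mass ratio */)
        {
            for (long i = 0; i < np; ++i) {
                double xg = p[i].x / dx;
                int    j  = (int)xg;                 /* cell index: gather is indexed */
                double w  = xg - j;                  /* linear interpolation weight   */
                double Ep = (1.0 - w) * E[j] + w * E[(j + 1) % nx];

                p[i].v += qm * Ep * dt;              /* accelerate                    */
                p[i].x += p[i].v * dt;               /* move                          */
                if (p[i].x < 0.0)      p[i].x += nx * dx;   /* periodic wrap          */
                if (p[i].x >= nx * dx) p[i].x -= nx * dx;
            }
        }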

  1. Estado e controle nas prisões

    OpenAIRE

    Batista, Analía Soria

    2009-01-01

    This article analyzes the problem of producing control and order in Brazilian prisons from historical and sociological perspectives, and raises the hypothesis that, in Brazil, two modes of constructing order and control in prisons coexist. One of them, the minority, is based on the prerogative of the State in managing everyday prison life. The other involves negotiating the pacification of the prison between the State and the inmates' leaders. Although, in the first case, the prerogative of...

  2. Advances in software science and technology

    CERN Document Server

    Hikita, Teruo; Kakuda, Hiroyasu

    1993-01-01

    Advances in Software Science and Technology, Volume 4 provides information pertinent to the advancement of the science and technology of computer software. This book discusses the various applications for computer systems.Organized into two parts encompassing 10 chapters, this volume begins with an overview of the historical survey of programming languages for vector/parallel computers in Japan and describes compiling methods for supercomputers in Japan. This text then explains the model of a Japanese software factory, which is presented by the logical configuration that has been satisfied by

  3. Regulatory perspective on NAS recommendations for Yucca Mountain standards

    International Nuclear Information System (INIS)

    Brocoum, S.J.; Nesbit, S.P.; Duguid, J.A.; Lugo, M.A.; Krishna, P.M.

    1996-01-01

    This paper provides a regulatory perspective from the viewpoint of the potential licensee, the US Department of Energy (DOE), on the National Academy of Sciences (NAS) report on Yucca Mountain standards published in August 1995. The DOE agrees with some aspects of the NAS report; however, the DOE has serious concerns with the ability to implement some of the recommendations in a reasonable manner

  4. SUSTENTABILIDADE NAS CONSTRUÇÕES

    Directory of Open Access Journals (Sweden)

    Gabriela Siqueira Manhães

    2014-11-01

    Full Text Available The production process in the civil construction sector is quite heterogeneous, encompassing different spheres of productive organization and different ways of commercializing its final products, the buildings. In construction, the complexity and indispensability of planning and management are aggravated by the market's growing demand for higher quality in development and better performance of the final product. This may not only direct communication among the agents involved and rationalize construction and building, but also point to intelligent and sustainable alternatives that respond to the need to minimize environmental impacts. The discussion involving the concepts of intelligent and sustainable buildings, although present in academia, does not yet seem settled and encompasses the different levels of organization of the individual and of society. The present study aims to discuss the use of these concepts in the construction process and in its final result, in order to identify possible relationships between them and their contributions in the context of sustainability in civil construction. It was observed that technological innovations applied at the various stages of the process (from construction to the final product) generated sustainable solutions that contribute to mitigating impacts on the environment. The work contributes to a reflection on sustainability concepts within a more integral vision of architecture, addressing both the process and the product of architectural production.

  5. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  6. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  7. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  8. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64 rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter are perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times larger compared to the speed we quoted in our submission. As we have pointed out in our paper QCD is notoriously sensitive to network and memory latencies, has a relatively high communication to computation ratio which can not be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and a 30 year long dream for lattice QCD

  9. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/Watt/s at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.

  10. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  11. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding by synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for 2 implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA) running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows a good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers a better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost to performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).

  12. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas; including the important effects of self-absorption of line-radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message passing paradigm, and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours
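
    For reference, the projected factor-of-three speedup is what Amdahl's law gives for a 70% parallel fraction: with parallel fraction p, the speedup on N processors is S(N) = 1 / ((1 - p) + p/N), which approaches 1/(1 - p), about 3.3, as N grows (for example, S(16) is roughly 2.9). A 30-hour sequential run therefore drops to roughly 10 hours once enough processors are applied to the parallel portion.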

  13. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity of which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists in representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product, a linear combination of vectors and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective consists in understanding the challenges of implementing CFD codes on new architectures.
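
    A minimal sketch of the three building blocks named above, and of a time step composed only from them, is given below. It is a generic CPU illustration (the paper's actual code and operator decomposition are not reproduced); the point is that a GPU port only needs device versions of these three routines while the integration driver stays unchanged.

        /* The three algebraic kernels named above, in their simplest CPU form. */

        /* y = A*x with A in compressed sparse row (CSR) form */
        void spmv(int n, const int *rp, const int *ci, const double *av,
                  const double *x, double *y)
        {
            for (int i = 0; i < n; ++i) {
                double s = 0.0;
                for (int k = rp[i]; k < rp[i + 1]; ++k) s += av[k] * x[ci[k]];
                y[i] = s;
            }
        }

        /* z = a*x + b*y (linear combination of vectors) */
        void axpby(int n, double a, const double *x, double b, const double *y,
                   double *z)
        {
            for (int i = 0; i < n; ++i) z[i] = a * x[i] + b * y[i];
        }

        /* x . y (the only kernel that needs a global reduction when distributed) */
        double dotp(int n, const double *x, const double *y)
        {
            double s = 0.0;
            for (int i = 0; i < n; ++i) s += x[i] * y[i];
            return s;
        }

        /* One explicit step u_new = u + dt*(D*u), D a sparse operator; returns
         * ||u_new||^2 as a sample use of the dot-product kernel (e.g. monitoring). */
        double explicit_step(int n, const int *rp, const int *ci, const double *Dv,
                             double dt, const double *u, double *u_new, double *work)
        {
            spmv(n, rp, ci, Dv, u, work);          /* work  = D*u          */
            axpby(n, 1.0, u, dt, work, u_new);     /* u_new = u + dt*work  */
            return dotp(n, u_new, u_new);
        }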

  14. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  15. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  16. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  17. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs, which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  18. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  19. CRM NAS ORGANIZAÇÕES

    Directory of Open Access Journals (Sweden)

    Leonardo Arruda Ribas

    2005-06-01

    Full Text Available The forces imposed by globalization, the Internet and technological evolution, combined with an era of discontinuity, have produced a new type of consumer, more questioning and demanding, whom organizations must win over in order to achieve loyalty. Many companies are working to know their customers better, implementing changes in organizational culture that shift the focus to the needs of their public. In this context, many organizations implement CRM (Customer Relationship Management), aiming at greater integration with customers by collecting information about their activities and needs in order to understand their behavior, obtain their satisfaction and, consequently, their retention. This work intends to clarify the experience of CRM and of its implementation at the international and national levels. A strong tendency toward CRM implementation was found not only worldwide but also among Brazilian organizations. One of the fundamental requirements for successful implementation is a complete understanding of this working philosophy and its absorption by the organization's culture. Another relevant aspect is the contribution of electronic support (software) to the integration of sales, marketing and customer support functions.

  20. Estado e controle nas prisões

    Directory of Open Access Journals (Sweden)

    Analía Soria Batista

    Full Text Available This article analyzes the problem of producing control and order in Brazilian prisons from historical and sociological perspectives, and raises the hypothesis that, in Brazil, two modes of constructing order and control in prisons coexist. One of them, the minority, is based on the prerogative of the State in managing everyday prison life. The other involves negotiating the pacification of the prison between the State and the inmates' leaders. Although in the first case the State's prerogative can be linked to adequate institutional conditions, and in the second (negotiation between the State and the inmates' leaders) to the precarious conditions of the prisons, such as overcrowding and a reduced number of prison officers, among others, the analysis showed that both modes express forms of relationships and social interactions historically produced between the State and society, which date back to the founding of the Republic and are recreated through the habitus of the social actors, and which are not restricted exclusively to the social space of prisons.

  1. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also non-typical users who may want to use the models such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are implemented, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  2. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
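
    The load-balancing rule described above (ranks assigned to each nested layer in proportion to its grid-point count, followed by a 1-D decomposition within each layer) can be sketched as follows; the rounding and minimum-one-rank details are illustrative assumptions, not necessarily the authors' exact scheme.

        /* Sketch of proportional rank allocation across nested grid layers. */
        #include <stdio.h>

        void allocate_ranks(int nlayers, const long *gridpoints, int total_ranks, int *ranks)
        {
            long total = 0;
            for (int l = 0; l < nlayers; ++l) total += gridpoints[l];

            int assigned = 0;
            for (int l = 0; l < nlayers; ++l) {
                ranks[l] = (int)((double)total_ranks * gridpoints[l] / total);
                if (ranks[l] < 1) ranks[l] = 1;          /* every layer needs a rank */
                assigned += ranks[l];
            }
            /* hand any leftover ranks out round-robin across the layers */
            for (int l = 0; assigned < total_ranks; l = (l + 1) % nlayers) {
                ranks[l]++; assigned++;
            }
        }

        int main(void)
        {
            long gp[3] = { 4000000, 1200000, 300000 };   /* hypothetical nest sizes */
            int r[3];
            allocate_ranks(3, gp, 64, r);
            printf("ranks per layer: %d %d %d\n", r[0], r[1], r[2]);
            return 0;
        }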

  3. Advanced technology composite aircraft structures

    Science.gov (United States)

    Ilcewicz, Larry B.; Walker, Thomas H.

    1991-01-01

    Work performed during the 25th month on NAS1-18889, Advanced Technology Composite Aircraft Structures, is summarized. The main objective of this program is to develop an integrated technology and demonstrate a confidence level that permits the cost- and weight-effective use of advanced composite materials in primary structures of future aircraft with the emphasis on pressurized fuselages. The period from 1-31 May 1991 is covered.

  4. Advancements in simulations of lattice quantum chromodynamics

    International Nuclear Information System (INIS)

    Lippert, T.

    2008-01-01

    An introduction to lattice QCD with emphasis on advanced fermion formulations and their simulation is given. In particular, overlap fermions will be presented, a quite novel fermionic discretization scheme that is able to exactly preserve chiral symmetry on the lattice. I will discuss efficiencies of state-of-the-art algorithms on highly scalable supercomputers and I will show that, due to many algorithmic improvements, overlap simulations will soon become feasible for realistic physical lattice sizes. Finally I am going to sketch the status of some current large scale lattice QCD simulations. (author)

  5. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputer potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge, their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full usage of the available hardware. Developing such a framework providing understandable code for domain scientists and being runtime efficient at the same time poses several challenges on developers of such a framework. For example, optimisations can be performed on individual operations or the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichsoever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using

  6. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    Software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte Carlo method using the supercomputer SKIF BSU has been developed. The program generates a large number of rays propagated from a source to the refractive structure. The ray trajectories are calculated under the assumption of geometrical optics. Absorption is calculated for each ray inside the refractive structure. Dynamic arrays are used to store the calculated ray parameters, which allows the X-ray field distribution to be reconstructed very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10^8 X-rays on the supercomputer with 1 and 30 processors took 3 hours and 6 minutes, respectively. 10^9 X-rays were calculated with 'Xray-SKIF', which allows the X-ray field behind the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
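
    As a hedged illustration of the absorption step described above (not the Xray-SKIF code), the sketch below traces rays through a single parabolic refractive lens: each ray hits the lens at a random transverse offset, traverses the corresponding material thickness, and survives with the Beer-Lambert probability. Material and geometry values are assumed for the example.

        /* Minimal Monte Carlo sketch of X-ray absorption in a parabolic refractive
         * lens: thickness t(y) = d + y*y/R, survival probability exp(-mu*t(y)).
         * All values are illustrative assumptions. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const long   n_rays   = 1000000;
            const double mu       = 4.0e2;     /* linear attenuation, 1/m (assumed) */
            const double d        = 30.0e-6;   /* minimum web thickness, m          */
            const double R        = 200.0e-6;  /* apex radius of the parabola, m    */
            const double aperture = 400.0e-6;  /* lens half-aperture, m             */

            long transmitted = 0;
            srand(12345);
            for (long i = 0; i < n_rays; ++i) {
                double y = aperture * ((double)rand() / RAND_MAX * 2.0 - 1.0);
                double t = d + y * y / R;                    /* path length in material */
                double u = (double)rand() / ((double)RAND_MAX + 1.0);
                if (u < exp(-mu * t)) transmitted++;         /* ray survives absorption */
            }
            printf("transmitted fraction: %.4f\n", (double)transmitted / n_rays);
            return 0;
        }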

  7. Nas dobras do legal e do ilegal: Ilegalismos e jogos de poder nas tramas da cidade

    Directory of Open Access Journals (Sweden)

    Vera da Silva Telles

    2009-07-01

    Full Text Available This article discusses the redefined relationships between the informal, the illegal and the illicit which follow contemporary forms of production and circulation of wealth. The paper explores how these redefinitions affect social orders and power struggles in relation to three situations in Sao Paulo: the illegalisms diffused from "lateral mobility" of the urban worker; the illegalisms of informal commerce in the nerve centre of the urban economy; and the poor São Paulo outskirts where all these strands intertwine around drug dealing.

  8. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  9. UAS Integration in the NAS: Detect and Avoid

    Science.gov (United States)

    Shively, Jay

    2018-01-01

    This presentation will cover the structure of the unmanned aircraft systems (UAS) integration into the national airspace system (NAS) project (UAS-NAS Project). The talk also details the motivation of the project to help develop standards for a detect-and-avoid (DAA) system, which is required in order to comply with requirements in manned aviation to see-and-avoid other traffic so as to maintain well clear. The presentation covers accomplishments reached by the project in Phase 1 of the research, and touches on the work to be done in Phase 2. The discussion ends with examples of the display work developed as a result of the Phase 1 research.

  10. UAS-NAS Flight Test Series 3: Test Environment Report

    Science.gov (United States)

    Hoang, Ty; Murphy, Jim; Otto, Neil

    2016-01-01

    The desire and ability to fly Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) are of increasing urgency. The application of unmanned aircraft to national security, defense, scientific, and emergency management missions is driving the critical need for less restrictive access by UAS to the NAS. UAS represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the NAS. NASA's UAS Integration in the NAS Project is conducting research in the areas of Separation Assurance/Sense and Avoid Interoperability (SSI), Human Systems Integration (HSI), and Communications (Comm), and Certification to support reducing the barriers of UAS access to the NAS. This research is broken into two research themes, namely UAS Integration and Test Infrastructure. UAS Integration focuses on airspace integration procedures and performance standards to enable UAS integration in the air transportation system, covering Detect and Avoid (DAA) performance standards, command and control performance standards, and human systems integration. The focus of Test Infrastructure is to enable development and validation of airspace integration procedures and performance standards, including integrated test and evaluation. In support of the integrated test and evaluation efforts, the Project will develop an adaptable, scalable, and schedulable relevant test environment capable of evaluating concepts and technologies for unmanned aircraft systems to safely operate in the NAS. To accomplish this task, the Project is conducting a series of human-in-the-loop (HITL) and flight test activities that integrate key concepts, technologies and/or procedures in a relevant air traffic environment. Each of the integrated events will build on the technical achievements, fidelity, and

  11. Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software

    Science.gov (United States)

    Hunter, George; Boisvert, Benjamin

    2013-01-01

    This document is the final report for the project entitled "Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software." This report consists of 17 sections which document the results of the several subtasks of this effort. The Probabilistic NAS Platform (PNP) is an air operations simulation platform developed and maintained by the Saab Sensis Corporation. The improvements made to the PNP simulation include the following: an airborne distributed separation assurance capability, a required time of arrival assignment and conformance capability, and a tactical and strategic weather avoidance capability.

  12. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting

  13. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities, especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65,536 compute cores of the supercomputer Shaheen II located at KAUST.
    Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1
    Licensing provisions: BSD 3-clause
    Programming language: C, C++
    External routines/libraries: For compiling: SCons, MPI (optional)
    Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl.
    Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016
    Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version.
    Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton.
    Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and
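
    As an illustration of the multi-instance sampling idea described above (averaging the same flow quantity over an ensemble of independent molecular dynamics instances so that smooth fields are obtained even on short time scales), the following Python sketch stubs each MD instance with a noisy Couette profile. This is a minimal sketch of the averaging step only; the function names are illustrative and not part of the MaMiCo API.

        import numpy as np

        def md_instance_velocity(n_cells, seed, wall_velocity=1.0, noise=0.3):
            """One stubbed 'MD instance': linear Couette profile plus thermal noise."""
            rng = np.random.default_rng(seed)
            y = (np.arange(n_cells) + 0.5) / n_cells      # cell centres in [0, 1]
            return wall_velocity * y + rng.normal(0.0, noise, n_cells)

        def ensemble_average(n_instances, n_cells):
            """Average the sampled cell velocities over independent instances."""
            samples = np.stack([md_instance_velocity(n_cells, seed=s)
                                for s in range(n_instances)])
            return samples.mean(axis=0)

        if __name__ == "__main__":
            exact = (np.arange(10) + 0.5) / 10
            for n in (1, 64):
                rms = np.sqrt(np.mean((ensemble_average(n, 10) - exact) ** 2))
                print(f"{n:3d} instance(s): rms deviation from smooth profile = {rms:.3f}")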

  14. Three-dimensional kinetic simulations of whistler turbulence in solar wind on parallel supercomputers

    Science.gov (United States)

    Chang, Ouliang

    The objective of this dissertation is to study the physics of whistler turbulence evolution and its role in energy transport and dissipation in the solar wind plasmas through computational and theoretical investigations. This dissertation presents the first fully three-dimensional (3D) particle-in-cell (PIC) simulations of whistler turbulence forward cascade in a homogeneous, collisionless plasma with a uniform background magnetic field B_0, and the first 3D PIC simulation of whistler turbulence with both forward and inverse cascades. Such computationally demanding research is made possible through the use of massively parallel, high performance electromagnetic PIC simulations on state-of-the-art supercomputers. Simulations are carried out to study characteristic properties of whistler turbulence under variable solar wind fluctuation amplitude (epsilon_e) and electron beta (beta_e), relative contributions to energy dissipation and electron heating in whistler turbulence from the quasilinear scenario and the intermittency scenario, and whistler turbulence preferential cascading direction and wavevector anisotropy. The 3D simulations of whistler turbulence exhibit a forward cascade of fluctuations into a broadband, anisotropic, turbulent spectrum at shorter wavelengths, with wavevectors preferentially quasi-perpendicular to B_0. The overall electron heating yields T_∥ > T_⊥ for all epsilon_e and beta_e values, indicating the primary linear wave-particle interaction is Landau damping. But linear wave-particle interactions play a minor role in shaping the wavevector spectrum, whereas nonlinear wave-wave interactions are overall stronger and faster processes, and ultimately determine the wavevector anisotropy. Simulated magnetic energy spectra as a function of wavenumber show a spectral break to steeper slopes, which scales as k_⊥ lambda_e ≃ 1 independent of beta_e values, where lambda_e is the electron inertial length, qualitatively similar to solar wind observations. Specific

  15. Supercomputer methods for the solution of fundamental problems of particle physics

    International Nuclear Information System (INIS)

    Moriarty, K.J.M.; Rebbi, C.

    1990-01-01

    The authors present motivation and methods for computer investigations in particle theory. They illustrate the computational formulation of quantum chromodynamics and selected applications to the calculation of hadronic properties. They discuss possible extensions of the methods developed for particle theory to different areas of application, such as cosmology and solid-state physics, that share common methods. Because of this commonality of methodology, advances in one area stimulate advances in other areas. They also outline future plans of research.

  16. Emprego dos gangliosidos do cortex cerebral nas neuropatias perifericas

    Directory of Open Access Journals (Sweden)

    James Pitagoras De Mattos

    1981-12-01

    Full Text Available The authors report their personal experience with the use of cerebral cortex gangliosides in peripheral neuropathies. The clinical and electromyographic trial proved effective in 30 of the 40 cases treated. They emphasize the better results obtained in cases of peripheral facial palsy.

  17. Depósito legal nas bibliotecas portuguesas

    OpenAIRE

    Fiolhais, Carlos

    2007-01-01

    The legal deposit model in Portuguese libraries is questioned in light of the financial and other difficulties they face; a rationalization of legal deposit is advocated, along with a stand to be taken by the Biblioteca Nacional, the entity that manages the system.

  18. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  19. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software package that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at http://www.bi.cs.titech.ac.jp/megadock. Contact: akiyama@cs.titech.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
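
    The core of FFT-based rigid-body docking of the kind MEGADOCK accelerates is a grid cross-correlation evaluated for all relative translations at once via the convolution theorem. The Python sketch below shows only that correlation step on toy shape grids; the grid construction and scoring weights are placeholders and do not reproduce MEGADOCK's actual scoring function.

        import numpy as np

        def correlation_scores(receptor, ligand):
            """Score every relative translation of the ligand with one 3D FFT correlation."""
            R = np.fft.fftn(receptor)
            L = np.fft.fftn(ligand)
            # Cross-correlation theorem: multiply by the conjugate spectrum and invert.
            return np.real(np.fft.ifftn(np.conj(R) * L))

        if __name__ == "__main__":
            n = 32
            receptor = np.zeros((n, n, n)); receptor[10:16, 10:16, 10:16] = 1.0
            ligand = np.zeros((n, n, n));   ligand[2:8, 2:8, 2:8] = 1.0
            scores = correlation_scores(receptor, ligand)
            best = np.unravel_index(np.argmax(scores), scores.shape)
            print("best translation (voxels):", best, " overlap score:", scores[best])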

  20. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a Bondarenko-type multigroup cross section library with 190 groups: 132 groups for the fast and epithermal regions and 58 groups for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
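
    The multi-particle tracking idea mentioned above, advancing a whole batch of histories in lock-step so that a vector processor (or, today, SIMD hardware) stays busy, can be illustrated with a toy one-group, one-dimensional slab transmission problem. The sketch below uses NumPy array operations as a stand-in for vector instructions; the geometry and cross sections are placeholders, not the physics of the actual code.

        import numpy as np

        def transmitted_fraction(n_particles, slab_thickness, sigma_t, absorb_prob, rng):
            """Advance all live particles at once with array ('vector') operations."""
            x = np.zeros(n_particles)              # positions
            mu = np.ones(n_particles)              # direction cosines (+1 = into the slab)
            alive = np.ones(n_particles, dtype=bool)
            transmitted = np.zeros(n_particles, dtype=bool)
            while alive.any():
                # Sample free-flight distances for every live particle in one go.
                d = -np.log(rng.random(alive.sum())) / sigma_t
                x[alive] += mu[alive] * d
                escaped = alive & (x >= slab_thickness)
                reflected = alive & (x < 0.0)
                transmitted |= escaped
                alive &= ~(escaped | reflected)
                # Collision: absorb a fraction, isotropically re-scatter the rest.
                absorbed = alive & (rng.random(n_particles) < absorb_prob)
                alive &= ~absorbed
                mu[alive] = rng.uniform(-1.0, 1.0, alive.sum())
            return transmitted.mean()

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            print("transmitted fraction:",
                  transmitted_fraction(100_000, 2.0, 1.0, 0.3, rng))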

  1. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF)- R and D Division, 92 - Clamart (France)

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. For them to remain so, particularly in a context of deregulation, three conditions must be met: competitiveness, safety, and public acceptance. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating material deterioration mechanisms, and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF's long-term R and D in the field of numerical simulation is given, in particular five challenges taken up by EDF together with its industrial and scientific partners. (author)

  2. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code-named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  3. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5%, and Overflow to within 10% accuracy.

  4. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    Today, advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for

  5. Applications Performance on NAS Intel Paragon XP/S - 15#

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)

    1994-01-01

    The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February 1993. The i860 XP microprocessor, with an integrated floating point unit and operating in dual-instruction mode, gives a peak performance of 75 million floating point operations per second (MFLOPS) for 64-bit floating point arithmetic. It is used in the Paragon XP/S-15 which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single-node one-dimensional FFT, a distributed two-dimensional FFT, and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compare it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: a. Impact of the operating system: Intel currently uses as a default the OSF/1 AD operating system from the Open Software Foundation (OSF). The OSF server is paged at 22 MB to make more memory available for the application, which degrades performance. We found that when the limit of 26 MB per node, out of 32 MB available, is reached, the application is paged out of main memory using virtual memory. When the application starts paging, the performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. b. Impact of data cache on the i860/XP: We measured the performance of the BLAS both assembly coded and Fortran

  6. Operational implications and proposed infrastructure changes for NAS integration of remotely piloted aircraft (RPA)

    Science.gov (United States)

    2014-12-01

    The intent of this report is to provide (1) an initial assessment of National Airspace System (NAS) infrastructure affected by continuing development and deployment of unmanned aircraft systems into the NAS, and (2) a description of process challenge...

  7. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project FY17 Annual Review

    Science.gov (United States)

    Sakahara, Robert; Hackenberg, Davis; Johnson, William

    2017-01-01

    This presentation was given to the Integrated Aviation Systems Program at the FY17 Annual Review of the UAS-NAS project. It captures an overview of the work completed by the UAS-NAS project and its subprojects.

  8. Auto-Suggest Capability via Machine Learning in SMART NAS, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We build machine learning capabilities that enable the Shadow Mode Assessment using Realistic Technologies for the NAS (SMART NAS) system to synthesize, optimize,...

  9. Música: uma imagem sonora nas comunidades eclesiais de base

    OpenAIRE

    Roberto Barroso da Rocha

    2012-01-01

    This dissertation aims to analyze the social function of music in the CEBs (Base Ecclesial Communities), which has biblical foundations and remains present today. The first part deals with the social function of music in the Bible up to the present day, with a brief narrative of the history of Western music. The second part addresses music in the CEBs and Liberation Theology as an important part of the musical context, in which the ideals of Liberation Theology are disseminated through music; ...

  10. NASA UAS Integration into the NAS Project: Human Systems Integration

    Science.gov (United States)

    Shively, Jay

    2016-01-01

    This presentation provides an overview of the work the Human Systems Integration (HSI) sub-project has done on detect and avoid (DAA) displays while working on the UAS (Unmanned Aircraft System) Integration into the NAS project. The most recent simulation on DAA interoperability with Traffic Collision Avoidance System (TCAS) is discussed in the most detail. The relationship of the work to the larger UAS community and next steps are also detailed.

  11. Deficiencia combinada de proteínas C y S

    Directory of Open Access Journals (Sweden)

    Yaneth Zamora-González

    Full Text Available Thrombophilias are a group of diseases that favor the formation of thrombosis, both arterial and venous, and have been associated with various complications during pregnancy, such as recurrent miscarriage, preeclampsia, intrauterine growth restriction, and intrauterine fetal death, among others. Congenital or acquired deficiency of coagulation proteins, such as proteins C and S, is associated with thrombotic events before the age of 30 or 40. Deep vein thrombosis is considered the most frequent clinical manifestation, although it may also be associated with cerebrovascular disease, recurrent pregnancy loss, and other ischemic states. Thrombotic diseases are currently among the leading causes of death worldwide; annual morbidity and mortality from thrombosis, whether arterial or venous, amounts to approximately two million people. We present a case with a history of recurrent pregnancy loss and deep vein thrombosis of the lower limbs with combined protein C and S deficiency.

  12. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    Science.gov (United States)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithography-dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics along with substrate, exposure, and all post-exposure processing must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributed computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than less rigorous models; some applications that use our distributed computing solution are then shown

  13. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×, while the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To our best knowledge, these are the largest-scale CPU–GPU collaborative simulations
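
    A minimal sketch of the static load-balancing idea described above: the cells of each block are split between the accelerator and the host in proportion to their measured throughputs, so that both finish a time step at about the same moment. The throughput numbers below are placeholders; the 2.3× and 45% figures quoted in the abstract are properties of HOSTA on TianHe-1A, not of this sketch.

        def split_cells(n_cells, gpu_cells_per_s, cpu_cells_per_s):
            """Return (gpu_share, cpu_share) so both devices take roughly equal time."""
            total_rate = gpu_cells_per_s + cpu_cells_per_s
            gpu_share = round(n_cells * gpu_cells_per_s / total_rate)
            return gpu_share, n_cells - gpu_share

        if __name__ == "__main__":
            gpu_rate, cpu_rate = 5.6e6, 4.3e6        # placeholder cells/second
            gpu, cpu = split_cells(1_000_000, gpu_rate, cpu_rate)
            print("cells on GPU:", gpu, " cells on CPU:", cpu)
            print("per-step time ratio GPU/CPU:", (gpu / gpu_rate) / (cpu / cpu_rate))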

  14. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×, while the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To our best knowledge, these are the largest-scale CPU–GPU collaborative simulations

  15. Connecting the dots, or nuclear data in the age of supercomputing

    International Nuclear Information System (INIS)

    Bauge, E.; Dupuis, M.; Hilaire, S.; Peru, S.; Koning, A.J.; Rochman, D.; Goriely, S.

    2014-01-01

    Recent increases in computing power have allowed much progress to be made in the field of nuclear data. The advances listed below are each significant, but together bring the potential to completely change our perspective on the nuclear data evaluation process. The use of modern nuclear modeling codes like TALYS and the Monte Carlo sampling of its model parameter space, together with a code system developed at NRG Petten, which automates the production of ENDF-6 formatted files, their processing, and their use in nuclear reactor calculations, constitutes the Total Monte Carlo approach, which directly links physical model parameters with calculated integral observables like k_eff. Together with the Backward-Forward Monte Carlo method for weighting samples according to their statistical likelihood, the Total Monte Carlo can be applied to complete isotopic chains in a consistent way, to simultaneously evaluate nuclear data and the associated uncertainties in the continuum region. Another improvement is found in the use of microscopic models for nuclear reaction calculations, for example making use of QRPA excited states calculated with the Gogny interaction to solve the long-standing question of the origin of the ad hoc 'pseudo-states' that are introduced in evaluated nuclear data files to account for the Livermore pulsed sphere experiments. A third advance consists of the recent optimization of the Gogny D1M effective nuclear interaction, including constraints from experimental nuclear masses at the 'beyond the mean field' level. All these advances are only made possible by the availability of vast computing resources, and even greater resources will allow connecting them, going continuously from the parameters of the nuclear interaction to reactor calculations. However, such a scheme will surely only be usable for applications if a few fine-tuning 'knobs' are introduced in it. The values of these adjusted parameters will have to be calibrated versus
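
    The Total Monte Carlo idea sketched above can be reduced to a few lines: sample the model parameters from their priors, run the model and the downstream calculation for every sample, and weight each sample by its agreement with experimental data. In the hedged Python sketch below, a toy cross-section model and a toy k_eff stand in for TALYS, ENDF-6 processing and reactor calculations, and a simple exp(-chi^2/2) weight stands in for the Backward-Forward Monte Carlo weighting.

        import numpy as np

        rng = np.random.default_rng(1)

        def toy_cross_section(param, energies):
            """Stand-in for a nuclear model: sigma(E) = param / sqrt(E)."""
            return param / np.sqrt(energies)

        def toy_keff(param):
            """Stand-in for the downstream integral observable."""
            return 0.95 + 0.05 * param

        energies = np.array([1.0, 2.0, 5.0])
        measured = np.array([1.05, 0.73, 0.44])        # pseudo-experimental data
        sigma_exp = 0.05

        params = rng.normal(1.0, 0.2, size=5000)       # sampling of the model parameter
        chi2 = ((toy_cross_section(params[:, None], energies) - measured) ** 2
                / sigma_exp ** 2).sum(axis=1)
        weights = np.exp(-0.5 * (chi2 - chi2.min()))
        weights /= weights.sum()

        keff = toy_keff(params)
        keff_mean = np.sum(weights * keff)
        keff_std = np.sqrt(np.sum(weights * (keff - keff_mean) ** 2))
        print(f"weighted k_eff = {keff_mean:.4f} +/- {keff_std:.4f}")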

  16. High electron mobility in Ga(In)NAs films grown by molecular beam epitaxy

    International Nuclear Information System (INIS)

    Miyashita, Naoya; Ahsan, Nazmul; Monirul Islam, Muhammad; Okada, Yoshitaka; Inagaki, Makoto; Yamaguchi, Masafumi

    2012-01-01

    We report the highest mobility values, above 2000 cm²/Vs, in Si-doped GaNAs films grown by molecular beam epitaxy. To understand the origin of the mechanisms that limit the electron mobility in GaNAs, the temperature dependence of the mobility was measured for high-mobility GaNAs and for a reference low-mobility GaInNAs. The temperature-dependent mobility of high-mobility GaNAs is similar to the GaAs case, while that of low-mobility GaInNAs shows a large decrease in the lower temperature region. The electron mobility of high-quality GaNAs can be explained by the intrinsic limiting factor of random alloy scattering and the extrinsic factor of ionized impurity scattering.
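
    A common first estimate for combining the two scattering mechanisms named above is Matthiessen's rule, 1/mu_total = 1/mu_alloy + 1/mu_impurity. The numbers in the sketch below are placeholders chosen only to land in the >2000 cm²/Vs range reported in the abstract; they are not values from the paper.

        def matthiessen(*mobilities_cm2_per_Vs):
            """Combine independent scattering-limited mobilities via Matthiessen's rule."""
            return 1.0 / sum(1.0 / mu for mu in mobilities_cm2_per_Vs)

        if __name__ == "__main__":
            mu_alloy = 3500.0        # hypothetical random-alloy-scattering limit
            mu_impurity = 5500.0     # hypothetical ionized-impurity limit
            mu_total = matthiessen(mu_alloy, mu_impurity)
            print("combined mobility ~ %.0f cm^2/Vs" % mu_total)   # ~2140 cm^2/Vs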

  17. Installation of a new Fortran compiler and effective programming method on the vector supercomputer

    International Nuclear Information System (INIS)

    Nemoto, Toshiyuki; Suzuki, Koichiro; Watanabe, Kenji; Machida, Masahiko; Osanai, Seiji; Isobe, Nobuo; Harada, Hiroo; Yokokawa, Mitsuo

    1992-07-01

    The Fortran compiler, version 10, was replaced with the new version 12 (V12) on the Fujitsu computer system at JAERI in May 1992. A benchmark test of the performance of the V12 compiler was carried out with 16 representative nuclear codes in advance of the installation. The new compiler improves performance by a factor of 1.13 on average. The effect of the enhanced functions of the compiler and its compatibility with the nuclear codes were also examined. An assistant tool for vectorization, TOP10EX, was developed. In this report, the results of the evaluation of the V12 compiler and the usage of the tools for vectorization are presented. (author)

  18. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    Directory of Open Access Journals (Sweden)

    Mark James Abraham

    2015-09-01

    Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.
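
    Of the parallelization levels listed above, the ensemble level is the simplest to illustrate: in temperature replica exchange, independent simulations run in parallel at different temperatures and periodically attempt to swap configurations using the Metropolis criterion sketched below in Python. This shows the algorithmic idea only and is not GROMACS code.

        import math
        import random

        K_B = 0.0083144621          # kJ/(mol K), Boltzmann constant in MD units

        def accept_swap(energy_i, temp_i, energy_j, temp_j, rand=random.random):
            """Metropolis acceptance test for exchanging replicas i and j."""
            delta = (1.0 / (K_B * temp_i) - 1.0 / (K_B * temp_j)) * (energy_j - energy_i)
            return delta <= 0.0 or rand() < math.exp(-delta)

        if __name__ == "__main__":
            random.seed(0)
            print(accept_swap(energy_i=-5000.0, temp_i=300.0,
                              energy_j=-4980.0, temp_j=310.0))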

  19. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available Data are presented from modeling on the SKIF supercomputer system of the technological process of mold filling by means of the computer system 'ProLIT-lc', and also from modeling of the steel pouring process by means of 'ProNRS-lc'. The influence of the number of processors of the multi-core SKIF computer system on the speedup and modeling time of the technological processes connected with the production of castings and slugs is shown.

  20. Propaganda negativa nas eleições presidenciais brasileiras

    OpenAIRE

    Borba, Felipe

    2015-01-01

    Abstract: This article investigates negative advertising in Brazilian presidential elections. The topic is highly relevant, given that the recent literature suggests that the tone of campaigns has important consequences for vote choice, political participation, and voters' level of information. However, most of these studies were conducted to understand the political reality of the United States. In Brazil, despite the growing interest in the effects ...

  1. Estado Confusional Agudo nas Unidades de Cuidados Intensivos

    OpenAIRE

    Santos, L; Alcântara, J

    1996-01-01

    The behavioral changes frequently observed in patients admitted to intensive care units (ICU) can, in most cases, be appropriately termed acute confusional state, which is characterized by: fluctuation of the level of alertness, disturbance of the sleep-wake cycle, impaired attention and concentration, disorganized thinking manifested, among other ways, by incoherent speech, perceptual disturbances in the form of illusions and/or hallucinations, disorient...

  2. Pensamento estratégico nas organizações

    OpenAIRE

    Kich, Juliane Ines Di Francesco

    2015-01-01

    Thesis (doctorate) - Universidade Federal de Santa Catarina, Centro Sócio-Econômico, Programa de Pós-Graduação em Administração, Florianópolis, 2015. This thesis proposes a model that supports the development of strategic thinking in organizations. Its main objective is to answer the following research question: What are the attributes that form the concept of strategic thinking, and what are the organizational elements that develop those attributes in the members of an organiza...

  3. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3 + 1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
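
    The workhorse for this kind of field-evolution equation is spectral split-step propagation. As a drastically reduced, hedged stand-in for the (3 + 1)-dimensional problem described above, the Python sketch below propagates a one-dimensional cubic nonlinear Schrödinger soliton with a split-step Fourier scheme; the dimensionless units and the absence of ionization terms are simplifying assumptions.

        import numpy as np

        def split_step(A, dz, steps, dt):
            """Strang-split propagation of i dA/dz = -(1/2) A_tt - |A|^2 A."""
            w = 2.0 * np.pi * np.fft.fftfreq(A.size, d=dt)   # angular frequencies
            half_disp = np.exp(-0.25j * w**2 * dz)           # half dispersion step
            for _ in range(steps):
                A = np.fft.ifft(half_disp * np.fft.fft(A))
                A = A * np.exp(1j * np.abs(A)**2 * dz)       # full nonlinear step
                A = np.fft.ifft(half_disp * np.fft.fft(A))
            return A

        if __name__ == "__main__":
            t = np.linspace(-20.0, 20.0, 1024, endpoint=False)
            A0 = (1.0 / np.cosh(t)).astype(complex)          # fundamental soliton
            A = split_step(A0, dz=0.01, steps=500, dt=t[1] - t[0])
            print("peak |A| after propagation:", np.abs(A).max())   # stays near 1.0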

  4. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM² Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  5. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.

  6. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL – to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; we measured results and documented our findings. We conclude that UPC is a challenge for porting code, Berkeley UPC is not very robust, and UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  7. Petascale supercomputing to accelerate the design of high-temperature alloys

    Science.gov (United States)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen

    2017-12-01

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ′-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
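
    The final step described above, predicting segregation energies without new DFT runs, can be sketched as fitting a cheap surrogate on a few solute descriptors and applying it to unseen elements. The descriptors, the linear least-squares learner and the synthetic data below are all illustrative assumptions; the paper's actual descriptor set and learner may differ.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic "DFT" training set: 34 solutes, three illustrative descriptors
        # (e.g. size misfit, electronegativity difference, cohesive-energy difference).
        n_train = 34
        descriptors = rng.normal(size=(n_train, 3))
        true_coeff = np.array([0.6, -0.3, 0.1])
        seg_energy = descriptors @ true_coeff + rng.normal(0.0, 0.05, n_train)

        # Fit a linear surrogate by least squares (a stand-in for fancier learners).
        X = np.hstack([descriptors, np.ones((n_train, 1))])
        coeff, *_ = np.linalg.lstsq(X, seg_energy, rcond=None)

        # Predict an unseen solute from its descriptors alone, without a new DFT run.
        new_solute = np.array([0.2, -1.0, 0.5, 1.0])
        print("predicted segregation energy (eV):", float(new_solute @ coeff))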

  8. "Sempre tivemos mulheres nos cantos e nas cordas": uma pesquisa sobre o lugar feminino nas corporações musicais

    Directory of Open Access Journals (Sweden)

    Mayara Pacheco Coelho

    2014-04-01

    Full Text Available This article is part of a research-intervention project on music and its identity articulations in the musical corporations (bands and orchestras) of the Campos das Vertentes region, especially São João del-Rei and neighboring towns. In this region, music plays a significant role in the formation of the citizens' cultural identity and in the history of the municipalities. The present study investigates gender determinations, seeking to understand how women musicians participate in the region's bands and orchestras. To this end, archaeological discourse analysis was used to contrast the statements of women musicians with those of the male musicians of the corporations, as well as with the male discourse found in philosophy and with the utopian discourse about women. It was observed that traditional gender differences remain concealed in the everyday life of the musical corporations. However, it was also observed that women musicians are beginning to be recognized in the corporations and, above all, recognize themselves as capable of taking flight within them.

  9. Concepts of Integration for UAS Operations in the NAS

    Science.gov (United States)

    Consiglio, Maria C.; Chamberlain, James P.; Munoz, Cesar A.; Hoffler, Keith D.

    2012-01-01

    One of the major challenges facing the integration of Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) is the lack of an onboard pilot who can comply with the legal requirement identified in the US Code of Federal Regulations (CFR) that pilots see and avoid other aircraft. UAS will be expected to demonstrate the means to perform the function of see and avoid while preserving the safety level of the airspace and the efficiency of the air traffic system. This paper introduces a Sense and Avoid (SAA) concept for integration of UAS into the NAS that is currently being developed by the National Aeronautics and Space Administration (NASA) and identifies areas that require additional experimental evaluation to further inform various elements of the concept. The concept design rests on interoperability principles that take into account both the Air Traffic Control (ATC) environment and existing systems such as the Traffic Alert and Collision Avoidance System (TCAS). Specifically, the concept addresses the determination of well clear values that are large enough to avoid issuance of TCAS corrective Resolution Advisories, undue concern by pilots of proximate aircraft, and issuance of controller traffic alerts. The concept also addresses appropriate declaration times for projected losses of well clear conditions and maneuvers to regain well clear separation.

  10. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of the inhibition of a target protein by a small molecule (ligand) is defined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to new inhibitor development, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important aspects limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
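
    The quasi-docking protocol described above amounts to re-scoring a fixed set of low-energy poses with a second energy model and asking how close the best-scored pose lies to the native one. In the hedged Python sketch below, random poses and two toy scoring functions stand in for the MMFF94 minima and for the CHARMM/PM7 re-scoring; only the bookkeeping of the comparison is illustrated.

        import numpy as np

        rng = np.random.default_rng(3)

        n_poses = 8192
        rmsd_to_native = np.abs(rng.normal(4.0, 2.0, n_poses))   # Angstrom, toy values
        rmsd_to_native[0] = 0.3                                   # one near-native pose

        def score_method_a(rmsd):
            """Toy scoring function with a noisy energy funnel."""
            return 0.8 * rmsd + rng.normal(0.0, 1.5, rmsd.size)

        def score_method_b(rmsd):
            """Toy scoring function with a sharper funnel (less noise)."""
            return 0.8 * rmsd + rng.normal(0.0, 0.3, rmsd.size)

        for name, scorer in (("method A", score_method_a), ("method B", score_method_b)):
            best = np.argmin(scorer(rmsd_to_native))
            print(f"{name}: best-scored pose has RMSD {rmsd_to_native[best]:.2f} A to native")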

  11. Refração atmosferica nas medidas Doppler

    OpenAIRE

    Oliveira, Leonardo Castro de

    1990-01-01

    Advisor: Jose Bittencourt de Andrade. Dissertation (master's) - Universidade Federal do Parana, Setor de Tecnologia. Abstract: The aim of this dissertation is to investigate atmospheric refraction in Doppler measurements. Four models for the tropospheric correction are considered, together with the two-frequency model for correcting ionospheric refraction. Different sources of meteorological data are also tested. All tests are performed using the program GEO...

  12. O Jornalismo nas Rádios Comunitárias

    OpenAIRE

    Rosembach, Cilto José

    2006-01-01

    This study analyzes journalism in community radio stations from the paradigm of popular and alternative communication and from the historical context of community radio in Brazil. The journalistic programming of two community radio stations in the State of São Paulo is analyzed using a theoretical framework that elucidates popular communication and prioritizes the concepts of popular journalism. The stations analyzed include Rádio Cantareira FM 107.5, from Vila Isabel, district of Brasilândia, São...

  13. Ensinar e aprender geografia com/nas redes sociais

    Directory of Open Access Journals (Sweden)

    Élida Pasini Tonetto

    2015-01-01

    Full Text Available This study reflects on the potential and operationalization of pedagogical practices in Geography through the appropriation of online social networks. To this end, we analyze the possible potentialities offered by online social networks for Geography teaching, how they can be operationalized in pedagogical practices, and how they can contribute to teaching and learning Geography more meaningfully. The theoretical threads of the research are woven into an understanding of online learning, entangling the concepts of space and cyberspace and moving through two fundamental places: the school and the networks. The methodological approach is built along the lines of post-critical research in education, with Facebook as the locus for analyzing the new forms of communication that subjectify individuals and engender new formats of teaching. The results point to different potentialities and operationalizations of online social networks, which do not represent merely the use of technique in the classroom, but rather part of the agenda of building meaningful learning processes in Geography through social networks, which are a contemporary form of communicating and interacting present in students' daily lives.

  14. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an ongoing project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed

  15. Remotely Operated Aircraft (ROA) Impact on the National Airspace System (NAS) Work Package: Automation Impacts of ROA's in the NAS

    Science.gov (United States)

    2005-01-01

    The purpose of this document is to analyze the impact of Remotely Operated Aircraft (ROA) operations on current and planned Air Traffic Control (ATC) automation systems in the En Route, Terminal, and Traffic Flow Management domains. The operational aspects of ROA flight, while similar, are not entirely identical to those of their manned counterparts and may not have been considered within the time horizons of the automation tools. This analysis was performed to determine whether the flight characteristics of ROAs would be compatible with current and future NAS automation tools. Improvements to existing systems and processes are recommended that would give air traffic controllers an indication that a particular aircraft is an ROA, along with modifications to IFR flight plan processing algorithms and/or designation of airspace where an ROA will be operating for long periods of time.

  16. Interacciones de las proteínas disulfuro isomerasa y de choque térmico Hsc70 con proteínas estructurales recombinantes purificadas de rotavirus

    Directory of Open Access Journals (Sweden)

    Luz Y. Moreno

    2016-01-01

    Full Text Available Introduction. Rotavirus entry into cells appears to be mediated by sequential interactions between viral structural proteins and certain cell-surface molecules. However, the mechanisms by which rotavirus infects the target cell are not yet well understood. There is some evidence that the rotavirus structural proteins VP5* and VP8* interact with certain cell-surface molecules. The availability of recombinant rotavirus structural proteins in sufficient quantity has become an important aspect of identifying the specific virus-cell receptor interactions during the early events of the infectious process. Objective. The purpose of the present work is to analyze the interactions between the recombinant rotavirus structural proteins VP5*, VP8*, and VP6 and the cellular proteins Hsc70 and PDI, using their purified recombinant versions. Materials and methods. The recombinant rotavirus proteins VP5* and VP8* and the recombinant cellular proteins Hsc70 and PDI were expressed in E. coli BL21 (DE3), while VP6 was expressed in MA104 cells transfected with a recombinant vaccinia virus. The interaction between the rotavirus and cellular proteins was studied by ELISA, co-immunoprecipitation, and SDS-PAGE/Western blotting. Results. The optimal conditions for the expression of the recombinant proteins were determined and antibodies against them were generated. The results suggested that the viral proteins rVP5* and rVP6 interact with Hsc70 and PDI in vitro. These recombinant viral proteins were also found to interact with Hsc70 in lipid rafts in cell culture. Treatment of the cells with either DLPs or rVP6 significantly inhibited rotavirus infection. Conclusion. The results allow the conclusion that r

  17. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  18. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries (such as Los Alamos, CERN, and the Rutherford Laboratory) but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  19. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  20. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  1. Supercomputer debugging workshop `92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  2. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions

  3. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1986-01-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  4. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1985-12-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  5. Pensamento Estratégico nas Organizações

    Directory of Open Access Journals (Sweden)

    Juliane Inês Di Francesco Kich

    2014-08-01

    Full Text Available http://dx.doi.org/10.5007/2175-8077.2014v16n39p134 This paper seeks to analyse a new way of thinking about organizational strategies, through a theoretical discussion of the term "strategic thinking" and of its development in organizations. To this end, a bibliographic study was carried out with the aim of deepening the topic and reaching a conceptual foundation that can support later analyses. Among the findings, it stands out that the pragmatic characteristics of strategic planning no longer seem to have a place in today's organizational world; this tool needs to be linked to the strategic thinking process in order to bring more effective results. In this sense, the challenge lies in how organizations can develop strategic planning that encourages strategic thinking instead of undermining it, as well as in developing tools that foster the capacity to think strategically in employees at all hierarchical levels

  6. Jornalismo, gêneros e diversidade cultural nas revistas brasileiras

    Directory of Open Access Journals (Sweden)

    Ana Regina Rêgo

    Full Text Available This article presents the results of a study of three Brazilian cultural journalism publications, namely Cult, Bravo and Brasileiros, which were analysed with the aim of identifying the visibility of cultural manifestations in their pages, by mapping the published articles. From another angle, the intent was to map the journalistic genres most used in covering cultural topics, in order to identify the degree of importance given to the cultural themes portrayed in the publications, as well as to verify trends in the text of cultural journalism. The methodology used in the first case was simple diagnosis and, in the second, content analysis by pairing. The conclusion is that, although Brazilian cultural journalism is opening up to reporting the country's diversity, a notable predominance of events from the Southeast still remains among the topics covered by the publications mentioned.

  7. Investigation of deep levels in GaInNAs

    International Nuclear Information System (INIS)

    Abulfotuh, F.; Balcioglu, A.; Friedman, D.; Geisz, J.; Kurtz, S.

    1999-01-01

    This paper presents and discusses the first deep-level transient spectroscopy (DLTS) data obtained from measurements carried out on both Schottky barriers and homojunction devices of GaInNAs. The effect of N and In doping on the electrical properties of the GaInNAs devices, which results in structural defects and interface states, has been investigated. Moreover, the location and densities of deep levels related to the presence of N, In, and N+In are identified and correlated with the device performance. The data confirmed that the presence of N alone creates a high density of shallow hole traps related to the N atom and structural defects in the device. Doping by In, if present alone, also creates low-density deep traps (related to the In atom and structural defects) and extremely deep interface states. On the other hand, the co-presence of In and N eliminates both the interface states and the levels related to structural defects. However, the device still has a high density of the shallow and deep traps that are responsible for the photocurrent loss in the GaInNAs device, together with the possible short diffusion length. copyright 1999 American Institute of Physics

  8. CULTURA DE APRENDIZAGEM E DESEMPENHO NAS TV’S CEARENSES

    Directory of Open Access Journals (Sweden)

    Antonia Silva

    2013-12-01

    Full Text Available This study falls within the field of organizational learning culture. It analyses the relationship between learning culture (CA) and organizational performance (DO) in TV broadcasters of Ceará, as perceived by their employees. It is a descriptive survey with a quantitative approach. Data were collected with the "DLOQ-A" questionnaire developed by Yang (2003), containing 27 items and answered by 95 individuals. The Ordinary Least Squares method was applied to analyse the correlation between the two variables, organizational learning culture and organizational performance. The results indicate that organizational performance in the broadcasters is strongly associated with financial performance. The learning culture factors with the greatest explanatory power lie at the individual level (opportunities for continuous learning) and at the organizational level (encouragement of strategic leadership for learning, and development of a systemic view of the organization). In short, learning culture exerts a strong influence on organizational performance, with a regression coefficient of 0.763.

  9. Production technology of an electrolyte for Na/S batteries

    Science.gov (United States)

    Heimke, G.; Mayer, H.; Reckziegel, A.

    1982-05-01

    The trend to develop a cheap electrochemical electric battery and the development of the Na/S system are discussed. The main element in this type of battery is the beta Al2O3 solid electrolyte. Characteristics of first importance for this material are: specific surface, density of green and of sintered material, absence of cracks, gas permeability, resistance to flexion, purity, electrical conductivity, crystal structure and dimensions. The influence of the production method on all these characteristics was investigated, e.g., method of compacting powder, tunnel kiln sintering versus static chamber furnace sintering, sintering inside a container or not, and type of kiln material when sintering in a container. In the stationary chamber furnace, beta alumina ceramics were produced with a density of 3.2 g/cm3, a mechanical strength higher than 160 MPa, and an electrical conductivity of about 0.125 Ohm^-1 cm^-1 at 300 °C. The best kiln material proved to be MgO and MgAl2O3.MgO ceramics.

  10. Investigation of Deep Levels in GaInNas

    International Nuclear Information System (INIS)

    Balcioglu, A.; Friedman, D.; Abulfotuh, F.; Geisz, J.; Kurtz, S.

    1998-01-01

    This paper presents and discusses the first deep-level transient spectroscopy (DLTS) data obtained from measurements carried out on both Schottky barriers and homojunction devices of GaInNAs. The effect of N and In doping on the electrical properties of the GaInNAs devices, which results in structural defects and interface states, has been investigated. Moreover, the location and densities of deep levels related to the presence of N, In, and N+In are identified and correlated with the device performance. The data confirmed that the presence of N alone creates a high density of shallow hole traps related to the N atom and structural defects in the device. Doping by In, if present alone, also creates low-density deep traps (related to the In atom and structural defects) and extremely deep interface states. On the other hand, the co-presence of In and N eliminates both the interface states and the levels related to structural defects. However, the device still has a high density of the shallow and deep traps that are responsible for the photocurrent loss in the GaInNAs device, together with the possible short diffusion length

  11. Estruturas de poder nas redes de financiamento político nas eleições de 2010 no Brasil

    Directory of Open Access Journals (Sweden)

    Rodrigo Rossi Horochovski

    2016-04-01

    Full Text Available Abstract This article analyses the 299,968 relationships established among the 251,665 donors and/or recipients of legal campaign funds covered by the campaign finance reports of the 2010 elections in Brazil, encompassing all candidates and parties. Social network analysis and complementary statistical treatments are applied to the data from the Superior Electoral Court (TSE) to explore the topology of the sub-networks (components) and to compute centrality measures for the actors: candidates, party agents and private funders. The results expose the high connectivity and asymmetry of the electoral financing network in Brazil and show that the actors' position in strata of the network is decisive for the performance of both candidates and funders, revealing, in an unprecedented way, an elite within Brazilian political-electoral power.

  12. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  13. Advanced Interval Management: A Benefit Analysis

    Science.gov (United States)

    Timer, Sebastian; Peters, Mark

    2016-01-01

    This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

  14. A Inovação nas Empresas de Caruaru

    Directory of Open Access Journals (Sweden)

    Lucas Felipe Pereira Torres

    2014-12-01

    Full Text Available The discussion around innovation is a highly relevant topic today. It is a process that is at once systematic and composed of risks and challenges assumed in the pursuit of positive results, whether through investment in research, development and training of professionals, through the reduction of production or process costs, through increased revenue, or through reaching a new market. Thus, the general objective of this article is to gather empirical evidence on the perception of innovation held by entrepreneurs in the municipality of Caruaru, Pernambuco (PE). The methodology used is classified as an exploratory survey. Data were collected through closed questionnaires sent by e-mail to entrepreneurs of Caruaru-PE. At the end of the research, the data were coded and tabulated, followed by their description and analysis. Data analysis was carried out with Microsoft Excel 2010, which, after data processing, produced charts for the variables studied. The main results of the research include the identification of the focus and characteristics of innovation in the companies of Caruaru-PE, as well as the profile of the innovative Caruaru entrepreneur.

  15. Nonlinear Circuits and Neural Networks: Chip Implementation and Applications of the TeraOPS CNN Dynamic Array Supercomputer

    National Research Council Canada - National Science Library

    Chua, L

    1998-01-01

    .... Advances in research have been made in the following areas: (1) The design and implementation of the first-ever ARAM in the CNN Chip Set Architecture was successfully completed, and the samples were successfully tested; (2...

  16. O USO DO TWITTER NAS ELEIÇÕES PRESIDENCIAIS NO BRASIL EM 2010

    OpenAIRE

    MARCOS FRANCISCO SOARES FERREIRA

    2012-01-01

    This master's dissertation has as its object of study the use of Twitter in the 2010 presidential elections. Its main objectives are: 1) to analyse Twitter as a communication tool in elections; 2) to identify the use of Twitter by the three main candidates for the presidency of the republic in Brazil in the 2010 presidential elections. It is an exploratory study that systematizes empirical data collected directly from the Twitter accounts of the candidates studied...

  17. Neonatal Abstinence Syndrome (NAS) in Southwestern Border States: Examining Trends, Population Correlates, and Implications for Policy.

    Science.gov (United States)

    Hussaini, Khaleel S; Garcia Saavedra, Luigi F

    2018-03-23

    Introduction Neonatal abstinence syndrome (NAS) is a withdrawal syndrome in newborns following birth and is primarily caused by maternal drug use during pregnancy. This study examines trends, population correlates, and policy implications of NAS in two Southwest border states. Materials and Methods A cross-sectional analysis of Hospital Inpatient Discharge Data (HIDD) was utilized to examine the incidence of NAS in the Southwest border states of Arizona (AZ) and New Mexico (NM). All inpatient hospital births in AZ and NM from January 1, 2008 through December 31, 2013 with ICD9-CM codes for NAS (779.5), cocaine (760.72), or narcotics (760.75) were extracted. Results During 2008-2013 there were 1472 NAS cases in AZ and 888 in NM. The overall NAS rate during this period was 2.83 per 1000 births (95% CI 2.68-2.97) in AZ and 5.31 (95% CI 4.96-5.66) in NM. NAS rates increased 157% in AZ and 174% in NM. NAS newborns were more likely to have low birth weight, to have respiratory distress, to have feeding difficulties, and to be on state Medicaid insurance. The AZ border region (bordering Mexico) had NAS rates significantly higher than the state rate (4.06 per 1000 births [95% CI 3.68-4.44] vs. 2.83 [95% CI 2.68-2.97], respectively). In NM, the border region rate (2.09 per 1000 births [95% CI 1.48-2.69]) was significantly lower than the state rate (5.31 [95% CI 4.96-5.66]). Conclusions Despite a dramatic increase in the incidence of NAS in the U.S. and, in particular, the Southwest border states of AZ and NM, there is still scant research on the overall incidence of NAS, its assessment in the Southwest border region, and associated long-term outcomes. The Healthy Border (HB) 2020 binational initiative of the U.S.-Mexico Border Health Commission is an initiative that addresses several public health priorities that include not only chronic and degenerative diseases, infectious diseases, injury prevention, and maternal and child health but also mental health and
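
    As a reading aid for the figures quoted above, the following C sketch shows how an incidence rate per 1,000 births and a normal-approximation (Poisson) 95% confidence interval of the kind reported here can be computed. The birth denominator below is back-calculated from the published Arizona rate and is purely illustrative; the abstract does not state the authors' exact CI method, so this is an assumption, not their procedure.

      #include <stdio.h>
      #include <math.h>

      /* Illustrative only: the AZ case count is taken from the abstract;
         the births denominator is an assumption (back-calculated from the
         published rate of 2.83 per 1,000). */
      int main(void)
      {
          double cases  = 1472.0;    /* NAS cases, AZ 2008-2013 */
          double births = 520000.0;  /* assumed number of births */

          double rate = cases / births * 1000.0;        /* per 1,000 births */
          double se   = sqrt(cases) / births * 1000.0;  /* Poisson std. error */

          printf("rate = %.2f per 1,000 births (95%% CI %.2f-%.2f)\n",
                 rate, rate - 1.96 * se, rate + 1.96 * se);
          return 0;
      }

    Compiled with any C compiler (linking the math library, e.g. -lm), this yields a rate of about 2.83 per 1,000 births with a confidence interval close to the published 2.68-2.97.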

  18. Advanced Ceramics

    International Nuclear Information System (INIS)

    1989-01-01

    The First Florida-Brazil Seminar on Materials and the Second State Meeting about new materials in Rio de Janeiro State show the specific technical contribution in advanced ceramic sector. The others main topics discussed for the development of the country are the advanced ceramic programs the market, the national technic-scientific capacitation, the advanced ceramic patents, etc. (C.G.C.) [pt

  19. Annabela Rita, Fernando Cristóvão (eds.), Daniela Marcheschi (Prefácio). Fabricar a Inovação – O Processo Criativo em Questão nas Ciências, nas Letras e nas Artes

    Directory of Open Access Journals (Sweden)

    Luísa Marinho Antunes

    2017-06-01

    Full Text Available Review of the volume Annabela Rita and Fernando Cristóvão (eds.), Fabricar a Inovação – O Processo Criativo em Questão nas Ciências, nas Letras e nas Artes, with a preface by Daniela Marcheschi. Lisboa: Gradiva, 2016. Print (396 pp.), followed by the Italian version of the preface.

  20. A LITISPENDÊNCIA NAS AÇÕES COLETIVAS

    Directory of Open Access Journals (Sweden)

    Maria Carolina Florentino Lascala

    2010-12-01

    Full Text Available Transindividual interests call for the adaptation of the rules of traditional civil procedure, conceived and designed to protect individual interests. This article addresses lis pendens in collective actions, its characteristics, effects and particularities. Lis pendens is the repetition of an action already in progress. In collective actions, this procedural phenomenon may exist even if the second action is brought by a different plaintiff. This is because, on the plaintiff side of a collective claim, the party is in court defending the interests of others, of a determinable or indeterminable group. Thus, even if there are different authorized plaintiffs seeking the same collective interest, both are in fact representing the same collectivity in court. In the field of extraordinary standing, even if the action is brought by a different plaintiff, the holder of the substantive right is equally represented, so the cause is repeated in court. It can accordingly be said that there is lis pendens between such actions. Even so, the aim of collective proceedings is the search for the real truth, and extinguishing one of the claims would therefore be harmful. What this article seeks to show is that the typical effect of related claims can be applied here, namely joining the actions for joint judgment. This is the solution that best serves the useful outcome of the proceedings in collective protection.

  1. O EFEITO CHAMARIZ NAS DECISÕES DE INVESTIMENTO

    Directory of Open Access Journals (Sweden)

    César Augusto Tibúrcio Silva

    2012-03-01

    Full Text Available The study of behavioural finance has been gaining prominence and has been the subject of several academic works. Many traditional concepts of economics about "economic man" have been questioned when aspects observed in practice are taken into account. In particular, modern studies in behavioural economics point out that people behave in biased and irrational ways when making decisions. They are influenced by details that can lead them to make less advantageous choices concerning their money without even realizing it. The decoy effect refers to the influence that an item to be chosen exerts on those who will make a choice, even leading them to take a decision that was previously doubtful or to change an earlier choice. This effect shows how people create false bases of comparison in order to simplify an environment of choice among options. This work investigates an investment decision in two situations, with and without the presence of a decoy, something that can influence the investors' final decision. A total of 386 undergraduate accounting students from five universities in the Distrito Federal took part in the study. The students were considered investors capable of analysing liquidity ratios and judging which would be the best investment option. It was found that people feel encouraged to invest in companies for which a false basis of comparison has been created by the presence of the decoy. This effect is observed with greater intensity in some specific groups of investors. Further research in the area is suggested, with the aim of observing this effect in other situations and of disseminating the study of behavioural finance.

  2. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed
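
    The MPMD coupling described above means that each component model runs as its own set of MPI processes within one global job. The C sketch below shows only the generic idea of partitioning a global communicator into per-component communicators with MPI_Comm_split; it is not TerrSysMP or OASIS3 code (the coupler's actual API and the real process layout are not reproduced), and the component split chosen here is hypothetical.

      #include <stdio.h>
      #include <mpi.h>

      /* Generic sketch of splitting the ranks of a multi-component job
         (atmosphere / land / subsurface) into per-component communicators.
         Illustration only; the real system delegates this to the coupler. */
      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int world_rank, world_size;
          MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
          MPI_Comm_size(MPI_COMM_WORLD, &world_size);

          /* Assumed layout: first third "atmosphere", next third "land",
             remainder "subsurface". Real layouts come from configuration. */
          int component = (3 * world_rank) / world_size;   /* 0, 1 or 2 */

          MPI_Comm comp_comm;
          MPI_Comm_split(MPI_COMM_WORLD, component, world_rank, &comp_comm);

          int comp_rank, comp_size;
          MPI_Comm_rank(comp_comm, &comp_rank);
          MPI_Comm_size(comp_comm, &comp_size);

          printf("world rank %d -> component %d (rank %d of %d)\n",
                 world_rank, component, comp_rank, comp_size);

          MPI_Comm_free(&comp_comm);
          MPI_Finalize();
          return 0;
      }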

  3. Low temperature grown GaNAsSb: A promising material for photoconductive switch application

    Energy Technology Data Exchange (ETDEWEB)

    Tan, K. H.; Yoon, S. F.; Wicaksono, S.; Loke, W. K.; Li, D. S. [School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798 (Singapore); Saadsaoud, N.; Tripon-Canseliet, C. [Laboratoire d'Electronique et Electromagnétisme, Pierre and Marie Curie University, 4 Place Jussieu, 75005 Paris (France); Lampin, J. F.; Decoster, D. [Institute of Electronics, Microelectronics and Nanotechnology (IEMN), UMR CNRS 8520, Universite des Sciences et Technologies de Lille, BP 60069, 59652 Villeneuve d'Ascq Cedex (France); Chazelas, J. [Thales Airborne Systems, 2 Avenue Gay Lussac, 78852 Elancourt (France)

    2013-09-09

    We report a photoconductive switch using low temperature grown GaNAsSb as the active material. The GaNAsSb layer was grown at 200 °C by molecular beam epitaxy in conjunction with a radio frequency plasma-assisted nitrogen source and a valved antimony cracker source. The low temperature growth of the GaNAsSb layer increased the dark resistivity of the switch and shortened the carrier lifetime. The switch exhibited a dark resistivity of 10^7 Ω cm, a photo-absorption of up to 2.1 μm, and a carrier lifetime of ∼1.3 ps. These results strongly support the suitability of low temperature grown GaNAsSb in the photoconductive switch application.

  4. Development of GaInNAsSb alloys: Growth, band structure, optical properties and applications

    International Nuclear Information System (INIS)

    Harris, James S. Jr.; Kudrawiec, R.; Yuen, H.B.; Bank, S.R.; Bae, H.P.; Wistey, M.A.; Jackrel, D.; Pickett, E.R.; Sarmiento, T.; Goddard, L.L.; Lordi, V.; Gugov, T.

    2007-01-01

    In the past few years, GaInNAsSb has been found to be a potentially superior material to both GaInNAs and InGaAsP for communications wavelength laser applications. It has been observed that due to the surfactant role of antimony during epitaxy, higher quality material can be grown over the entire 1.2-1.6 μm range on GaAs substrates. In addition, it has been discovered that antimony in GaInNAsSb also works as a constituent that significantly modifies the valence band. These findings motivated a systematic study of GaInNAsSb alloys with widely varying compositions. Our recent progress in growth and materials development of GaInNAsSb alloys and our fabrication of 1.5-1.6 μm lasers are discussed in this paper. We review our recent studies of the conduction band offset in (Ga,In) (N,As,Sb)/GaAs quantum wells and discuss the growth challenges of GaInNAsSb alloys. Finally, we report record setting long wavelength edge emitting lasers and the first monolithic VCSELs operating at 1.5 μm based on GaInNAsSb QWs grown on GaAs. Successful development of GaInNAsSb alloys for lasers has led to a much broader range of potential applications for this material including: solar cells, electroabsorption modulators, saturable absorbers and far infrared optoelectronic devices and these are also briefly discussed in this paper. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  5. Development of GaInNAsSb alloys: Growth, band structure, optical properties and applications

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James S. Jr.; Kudrawiec, R.; Yuen, H.B.; Bank, S.R.; Bae, H.P.; Wistey, M.A.; Jackrel, D.; Pickett, E.R.; Sarmiento, T.; Goddard, L.L.; Lordi, V.; Gugov, T. [Solid State and Photonics Laboratory, Stanford University, CIS-X 328, Via Ortega, Stanford, California 94305-4075 (United States)

    2007-08-15

    In the past few years, GaInNAsSb has been found to be a potentially superior material to both GaInNAs and InGaAsP for communications wavelength laser applications. It has been observed that due to the surfactant role of antimony during epitaxy, higher quality material can be grown over the entire 1.2-1.6 μm range on GaAs substrates. In addition, it has been discovered that antimony in GaInNAsSb also works as a constituent that significantly modifies the valence band. These findings motivated a systematic study of GaInNAsSb alloys with widely varying compositions. Our recent progress in growth and materials development of GaInNAsSb alloys and our fabrication of 1.5-1.6 μm lasers are discussed in this paper. We review our recent studies of the conduction band offset in (Ga,In) (N,As,Sb)/GaAs quantum wells and discuss the growth challenges of GaInNAsSb alloys. Finally, we report record setting long wavelength edge emitting lasers and the first monolithic VCSELs operating at 1.5 μm based on GaInNAsSb QWs grown on GaAs. Successful development of GaInNAsSb alloys for lasers has led to a much broader range of potential applications for this material including: solar cells, electroabsorption modulators, saturable absorbers and far infrared optoelectronic devices and these are also briefly discussed in this paper. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  6. Dois caras numa garagem: o cinema alternativo dos fãs de Guerra nas Estrelas

    Directory of Open Access Journals (Sweden)

    Tietzmann, Roberto

    2003-01-01

    Full Text Available The Star Wars film series is part of the popular imagination to a degree out of proportion with the hundreds of films released annually by the audiovisual industry. This is echoed in the series' box-office results and in the fans' loyalty to the fantasy universe created by George Lucas.

  7. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  8. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  9. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  10. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
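
    The abstract does not reproduce the VCR code itself. As a rough, serial illustration of the point cyclic reduction idea that VCR adapts, the C sketch below solves a tridiagonal system a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i], assuming n = 2^q - 1 unknowns with a[0] = c[n-1] = 0; the array names and the small test problem are hypothetical, and nothing here reflects the VPS 32 vectorization or the CRAT method.

      #include <stdio.h>

      #define N 7   /* must be 2^q - 1 for this simple sketch */

      /* Serial point cyclic reduction for a tridiagonal system
         a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]; arrays are
         overwritten with the reduced coefficients. */
      void cyclic_reduction(double a[], double b[], double c[], double d[],
                            double x[], int n)
      {
          int h;
          /* Forward phase: eliminate every other unknown, doubling the stride. */
          for (h = 1; 2 * h < n; h *= 2) {
              for (int i = 2 * h - 1; i + h < n; i += 2 * h) {
                  double alpha = a[i] / b[i - h];
                  double gamma = c[i] / b[i + h];
                  b[i] -= alpha * c[i - h] + gamma * a[i + h];
                  d[i] -= alpha * d[i - h] + gamma * d[i + h];
                  a[i]  = -alpha * a[i - h];
                  c[i]  = -gamma * c[i + h];
              }
          }
          /* One unknown remains in the middle of the grid. */
          x[h - 1] = d[h - 1] / b[h - 1];
          /* Backward phase: substitute solved unknowns back, halving the stride. */
          for (h /= 2; h >= 1; h /= 2) {
              for (int i = h - 1; i < n; i += 2 * h) {
                  double rhs = d[i];
                  if (i - h >= 0) rhs -= a[i] * x[i - h];
                  if (i + h < n)  rhs -= c[i] * x[i + h];
                  x[i] = rhs / b[i];
              }
          }
      }

      int main(void)
      {
          double a[N], b[N], c[N], d[N], x[N];
          /* Test problem -x[i-1] + 2x[i] - x[i+1] = d[i] with exact solution 1. */
          for (int i = 0; i < N; i++) {
              a[i] = (i == 0)     ? 0.0 : -1.0;
              b[i] = 2.0;
              c[i] = (i == N - 1) ? 0.0 : -1.0;
              d[i] = (i == 0 || i == N - 1) ? 1.0 : 0.0;
          }
          cyclic_reduction(a, b, c, d, x, N);
          for (int i = 0; i < N; i++)
              printf("x[%d] = %.3f\n", i, x[i]);
          return 0;
      }

    The appeal on vector hardware is that, at each level, all eliminations share the same stride and are independent, so the inner loop can be executed as a vector operation.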

  11. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  12. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  13. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience related to the implementation of MRBT, an atmospheric dispersion model for short-duration releases, is reported. The model was implemented on a QUADRICS-Q1 supercomputer. A description of the MRBT model is given first: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as a simulator in an emergency management system is considered.
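
    The abstract states only that the solution of the diffusion equation is Gaussian-like and gives the concentration as a function of space and time. For orientation, the textbook Gaussian puff solution for an instantaneous release of mass Q at effective height H in a uniform wind U along x has the form below; it is quoted as a standard reference form under that assumption, not necessarily the exact MRBT formulation.

      \[
      C(x,y,z,t) = \frac{Q}{(2\pi)^{3/2}\,\sigma_x\sigma_y\sigma_z}
      \exp\!\left(-\frac{(x-Ut)^2}{2\sigma_x^2}\right)
      \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
      \left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
           +\exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]
      \]

    Here sigma_x, sigma_y and sigma_z are dispersion parameters that grow with travel time, and the second term in z accounts for reflection at the ground.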

  14. Sandia`s network for Supercomputing `94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing `94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  15. Testing New Programming Paradigms with NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with the increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new effort has been made in defining new parallel programming paradigms. The best examples are: HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." In light of these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage
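
    To make the contrast with message passing concrete, the C fragment below shows the kind of loop-level, directive-based parallelism used in OpenMP versions of benchmarks such as the NPBs: a data-parallel initialization and a reduction, each parallelized by a single pragma. It is a minimal hypothetical sketch, not code from the NPB suite.

      #include <stdio.h>
      #include <omp.h>

      #define N 1000000

      int main(void)
      {
          static double u[N], v[N];
          double norm = 0.0;

          /* Data-parallel initialization: one directive, no explicit messages. */
          #pragma omp parallel for
          for (int i = 0; i < N; i++)
              u[i] = (double)i / N;

          /* axpy-like update plus a norm accumulation via a reduction clause. */
          #pragma omp parallel for reduction(+:norm)
          for (int i = 0; i < N; i++) {
              v[i] = 2.0 * u[i] + 1.0;
              norm += v[i] * v[i];
          }

          printf("threads=%d  norm=%f\n", omp_get_max_threads(), norm);
          return 0;
      }

    Compiled with an OpenMP-capable compiler (e.g. with -fopenmp), the same source runs serially or on all available threads without any explicit communication calls, which is the programming-model simplification the study set out to evaluate.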

  16. A crise do jornalismo: ecos e silêncios nas práticas e nas narrativas

    Directory of Open Access Journals (Sweden)

    Christa Liselote Berger Ramos Kuschick

    2015-12-01

    Full Text Available Researchers and journalists are dedicated to understanding the tensions that shake journalism's system of meaning production, which until now has enjoyed a certain hegemony as the discourse that represents a social present of reference (GOMIS, 1999). This article reflects on how the crisis of journalism has appeared in the discourses and practices of the press itself. The initial suspicion is that the crisis is an event silenced by the hegemonic media. On the other hand, it inevitably also shows through in journalistic practices, since it has intensely affected the working structure of newsrooms. In addition, it has prompted journalists to review their skills and the field to transform, in a way, its assumptions and ways of doing things. KEYWORDS: crisis of journalism; practices; hegemony; future of journalism.

  17. [Methods in neonatal abstinence syndrome (NAS): results of a nationwide survey in Austria].

    Science.gov (United States)

    Bauchinger, S; Sapetschnig, I; Danda, M; Sommer, C; Resch, B; Urlesberger, B; Raith, W

    2015-08-01

    Neonatal abstinence syndrome (NAS) occurs in neonates whose mothers have taken addictive drugs or were under substitution therapy during pregnancy. Incidence numbers of NAS are on the rise globally; even in Austria, NAS is no longer rare. The aim of our survey was to reveal the status quo of dealing with NAS in Austria. A questionnaire was sent to 20 neonatology departments all over Austria; items included questions on scoring, therapy, breast-feeding and follow-up procedures. The response rate was 95%, of which 94.7% had written guidelines concerning NAS. The median number of children being treated per year for NAS was 4. The Finnegan scoring system is used in 100% of the responding departments. Morphine is used most often, in opiate abuse (100%) as well as in multiple substance abuse (44.4%). The most frequent forms of morphine preparation are morphine and diluted tincture of opium. Frequency as well as dosage of medication vary broadly. 61.1% of the departments supported breast-feeding; regulations concerned participation in a substitution programme and general contraindications (HIV, HCV, HBV). Our results revealed a large west-east gradient in the number of patients being treated per year. NAS is no longer a rare entity in Austria (up to 50 cases per year in Vienna). Our survey showed that most neonatology departments in Austria treat their patients following written guidelines. Although all of them base these guidelines on international recommendations, there is no national consensus. © Georg Thieme Verlag KG Stuttgart · New York.

  18. Fiscal 2000 report on advanced parallelized compiler technology. Outlines; 2000 nendo advanced heiretsuka compiler gijutsu hokokusho (Gaiyo hen)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    Research and development was carried out on automatic parallelizing compiler technology, which improves the practical performance, cost/performance ratio, and ease of use of the multiprocessor systems now used for constructing supercomputers and expected to provide a fundamental architecture for microprocessors in the 21st century. Efforts were made to develop an automatic multigrain parallelization technology for extracting multigrain parallelism from a program and making full use of it, and a parallelizing tuning technology for accelerating parallelization by feeding back to the compiler the dynamic information and user knowledge acquired during execution. Moreover, a benchmark program was selected and studies were made to set execution rules and evaluation indexes for establishing technologies to objectively evaluate the performance of parallelizing compilers for existing commercial parallel processing computers, which was achieved through the implementation and evaluation of the 'Advanced parallelizing compiler technology research and development project.' (NEDO)

  19. UAS in the NAS: Survey Responses by ATC, Manned Aircraft Pilots, and UAS Pilots

    Science.gov (United States)

    Comstock, James R., Jr.; McAdaragh, Raymon; Ghatas, Rania W.; Burdette, Daniel W.; Trujillo, Anna C.

    2014-01-01

    NASA currently is working with industry and the Federal Aviation Administration (FAA) to establish future requirements for Unmanned Aircraft Systems (UAS) flying in the National Airspace System (NAS). To address these issues, NASA has established a multi-center "UAS Integration in the NAS" project. In order to establish Ground Control Station requirements for UAS, the perspective of each of the major players in NAS operations was desired. Three on-line surveys were administered that focused on Air Traffic Controllers (ATC), pilots of manned aircraft, and pilots of UAS. Follow-up telephone interviews were conducted with some survey respondents. The survey questions addressed UAS control, navigation, and communications from the perspective of small and large unmanned aircraft. Questions also addressed issues of UAS equipage, especially with regard to sense and avoid capabilities. From the civilian ATC and military ATC perspectives, of particular interest are how mixed (manned/UAS) operations have worked in the past and the role of aircraft equipage. Knowledge gained from this information is expected to assist the NASA UAS Integration in the NAS project in directing research foci, thus assisting the FAA in the development of rules, regulations, and policies related to UAS in the NAS.

  20. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    Science.gov (United States)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  1. ADVANCE PAYMENTS

    CERN Multimedia

    Human Resources Division

    2002-01-01

    Administrative Circular Nº 8 makes provision for the granting of advance payments, repayable in several monthly instalments, by the Organization to the members of its personnel. Members of the personnel are reminded that these advances are only authorized in exceptional circumstances and at the discretion of the Director-General. In view of the current financial situation of the Organization, and in particular the loans it will have to incur, the Directorate has decided to restrict the granting of such advances to exceptional or unforeseen circumstances entailing heavy expenditure and more specifically those pertaining to social issues. Human Resources Division Tel. 73962

  2. Advance payments

    CERN Multimedia

    Human Resources Division

    2003-01-01

    Administrative Circular Nº 8 makes provision for the granting of advance payments, repayable in several monthly instalments, by the Organization to the members of its personnel. Members of the personnel are reminded that these advances are only authorized in exceptional circumstances and at the discretion of the Director-General. In view of the current financial situation of the Organization, and in particular the loans it will have to incur, the Directorate has decided to restrict the granting of such advances to exceptional or unforeseen circumstances entailing heavy expenditure and more specifically those pertaining to social issues. Human Resources Division Tel. 73962

  3. LAS PROTEÍNAS DESORDENADAS Y SU FUNCIÓN: UNA NUEVA FORMA DE VER LA ESTRUCTURA DE LAS PROTEÍNAS Y LA RESPUESTA DE LAS PLANTAS AL ESTRÉS

    OpenAIRE

    César Luis Cuevas-Velázquez; Alejandra A. Covarrubias-Robles

    2011-01-01

    The dogma relating a protein's function to a defined three-dimensional structure has been challenged in recent years by the discovery and characterization of the proteins known as unstructured or disordered proteins. These proteins possess a high structural flexibility that allows them to adopt different structures and, therefore, to recognize diverse ligands while preserving specificity in recognizing them. Proteins of this type...

  4. Advanced Electronics

    Science.gov (United States)

    2017-07-21

    AFRL-RV-PS-TR-2017-0114, Advanced Electronics, Ashwani Sharma, 21 Jul 2017, Interim Report. Approved for public release; distribution is unlimited (RDMX-17-14919 dtd 20 Mar 2018). Abstract: The Space Electronics

  5. Examination of Frameworks for Safe Integration of Intelligent Small UAS into the NAS

    Science.gov (United States)

    Logan, Michael J.

    2012-01-01

    This paper discusses a proposed framework for the safe integration of small unmanned aerial systems (sUAS) into the National Airspace System (NAS). The paper briefly examines the potential uses of sUAS to build an understanding of the location and frequency of potential future flight operations based on the future applications of the sUAS systems. The paper then examines the types of systems that would be required to meet the application-level demand to determine "classes" of platforms and operations. A framework for categorization of the "intelligence" level of the UAS is postulated for purposes of NAS integration. Finally, constraints on the intelligent systems are postulated to ensure their ease of integration into the NAS.

  6. Improved performance in GaInNAs solar cells by hydrogen passivation

    International Nuclear Information System (INIS)

    Fukuda, M.; Whiteside, V. R.; Keay, J. C.; Meleco, A.; Sellers, I. R.; Hossain, K.; Golding, T. D.; Leroux, M.; Al Khalfioui, M.

    2015-01-01

    The effect of UV-activated hydrogenation on the performance of GaInNAs solar cells is presented. A proof-of-principle investigation was performed on non-optimum GaInNAs cells, which allowed a clearer investigation of the role of passivation on the intrinsic nitrogen-related defects in these materials. Upon optimized hydrogenation of GaInNAs, a significant reduction in the presence of defect and impurity based luminescence is observed as compared to that of unpassivated reference material. This improvement in the optical properties is directly transferred to an improved performance in solar cell operation, with a more than two-fold improvement in the external quantum efficiency and short circuit current density upon hydrogenation. Temperature dependent photovoltaic measurements indicate a strong contribution of carrier localization and detrapping processes, with non-radiative processes dominating in the reference materials, and evidence for additional strong radiative losses in the hydrogenated solar cells

  7. Documents assignment to archival fonds in research institutions of the NAS of Ukraine

    Directory of Open Access Journals (Sweden)

    Sichova O.

    2015-01-01

    Full Text Available The article analyzes the main aspects of assigning the records of research institutions of the NAS of Ukraine to archival fonds, in particular the assignment of records to fonds according to certain characteristics and the creation of archival fonds in accordance with the scientific principles of provenance, continuity and fond integrity. It shows the features of the internal systematization of the documents of NAS of Ukraine research institutions, which arise from the specifics of the institutions' functions. Examples are given of how institutional archival fonds acquire their names and of the conditions that lead to their renaming. The procedure for fixing the chronological scope of the archival fond of a NAS of Ukraine research institution is also analyzed.

  8. Invalidez por dor nas costas entre segurados da Previdência Social do Brasil

    Directory of Open Access Journals (Sweden)

    Ney Meziat Filho

    2011-06-01

    Full Text Available OBJECTIVE: To describe disability retirement due to back pain. METHODS: Descriptive study using data from the Unified Benefits Information System and the Statistical Yearbooks of the Brazilian Social Security system for 2007. The incidence rate of back pain as the cause of disability retirement was calculated by age and sex in each state. Working days lost to disability due to back pain were calculated by occupational activity. RESULTS: Idiopathic back pain was the leading cause of disability among both social-security and work-accident retirements. Most beneficiaries lived in urban areas and were commerce workers. The incidence rate of back pain as the cause of disability retirement in Brazil was 29.96 per 100,000 contributors. This value was higher among men and among older people. Rondônia showed a rate four times higher than expected (RT = 4.05), and the second highest rate, in Bahia, was approximately twice the expected value (RT = 2.07). Commerce workers accounted for 96.9% of the days lost to disability. CONCLUSIONS: Back pain was an important cause of disability in 2007, especially among commerce workers, with large differences between states.

  9. AdvancED Flex 4

    CERN Document Server

    Tiwari, Shashank; Schulze, Charlie

    2010-01-01

    AdvancED Flex 4 makes advanced Flex 4 concepts and techniques easy. Ajax, RIA, Web 2.0, mashups, mobile applications, the most sophisticated web tools, and the coolest interactive web applications are all covered with practical, visually oriented recipes. * Completely updated for the new tools in Flex 4. * Demonstrates how to use Flex 4 to create robust and scalable enterprise-grade Rich Internet Applications. * Teaches you to build high-performance web applications with interactivity that really engages your users. * What you'll learn: Practiced beginners and intermediate users of Flex, especially

  10. Producción de proteínas recombinantes de Plasmodium falciparum en Escherichia coli

    Directory of Open Access Journals (Sweden)

    Ángela Patricia Guerra

    2016-04-01

    Conclusion. The use of genetically modified E. coli strains was fundamental for achieving high expression levels of the four recombinant proteins evaluated, and made it possible to obtain two of them in soluble form. The strategy used allowed four recombinant P. falciparum proteins to be expressed in sufficient quantity to immunize mice and produce polyclonal antibodies, and, in addition, to retain pure, soluble protein from two of them for future assays.

  11. Patinação: uma alternativa nas aulas de educação física

    OpenAIRE

    Pardo, Cindya Katerine

    2016-01-01

    Introduction: Skating is undoubtedly a social phenomenon. In children it presents itself as a motivating game of vertigo, producing sensations of mastering the fear of falling and of speed. It develops several psychomotor aspects that are worked on in physical education classes. Objective: To present and include skating in the school environment, at the elementary level, as an alternative for Physical Education classes. Materials and Methods: A bibliographic rev...

  12. Trabalhadores publicos nas administrações regionais e subprefeituras : uma categoria ameaçada

    OpenAIRE

    João Petrucio Medeiros da Silva

    2005-01-01

    Abstract: Neoliberal policy and the rationalization process arising from the State reform policy implemented from the 1990s onward had strong impacts on the organization of work and on labor relations in the public sector, above all on the category of municipal public servants and, in particular, on the general services assistants of the Municipality of Campinas who work in the Regional Administrations and Subprefectures. The process of privatization, deregulation and...

  13. Lipoproteínas e inflamação na esclerose múltipla

    OpenAIRE

    Cascais, Maria João Coelho Melo

    2010-01-01

    Preamble: Inflammatory processes induce marked changes in the metabolism of plasma lipoproteins, which in turn regulate immune reactions. Given the many relationships between innate and acquired immunity and lipoprotein metabolism, in this work we investigated their possible relevance for understanding Multiple Sclerosis (MS), a neuroinflammatory and neurodegenerative disease of the Central Nervous System (CNS). As will become evident...

  14. Elementos de Análisis Cualitativo y Cuantitativo en Proteínas del Gluten de Trigo

    OpenAIRE

    Díaz Dellavalle, Paola; Dalla Rizza, Marco; Vázquez, Daniel; Castro, Marina

    2006-01-01

    The quality of bread wheat (Triticum aestivum L.) depends on the quality and quantity of the gluten proteins - glutenins and gliadins - which make up 10 to 14% of the grain proteins. Several quantitative parameters, such as the total protein content of the flour, the content of polymeric proteins present in the grain, and the glutenin-to-gliadin ratio, are related to breadmaking quality. This work presents the characterization of the glutenins of...

  15. Advances and new functions of VCSEL photonics

    Science.gov (United States)

    Koyama, Fumio

    2014-11-01

    A vertical-cavity surface-emitting laser (VCSEL) was born in Japan. Thirty-seven years of research and development have opened up various applications including datacom, sensors, optical interconnects, spectroscopy, optical storage, printers, laser displays, laser radar, atomic clocks and high-power sources. Many unique features have already been proven, such as low power consumption, wafer-level testing and so on. The market for VCSELs has been growing rapidly, and they are now key devices in local area networks based on multi-mode optical fibers. Optical interconnections in data centers and supercomputers are attracting much interest. In this paper, the advances in VCSEL photonics are reviewed. We present the high-speed modulation of VCSELs based on a coupled-cavity structure. For a further increase in transmission capacity per fiber, the wavelength engineering of VCSEL arrays is discussed, which includes wavelength stabilization and wavelength tuning based on a micro-machined cantilever structure. We also address a lateral integration platform and new functions, including a high-resolution beam scanner, vortex beam creation and a large-port free-space wavelength selective switch with a Bragg reflector waveguide.

  16. Advanced calculus

    CERN Document Server

    Nickerson, HK; Steenrod, NE

    2011-01-01

    ""This book is a radical departure from all previous concepts of advanced calculus,"" declared the Bulletin of the American Mathematics Society, ""and the nature of this departure merits serious study of the book by everyone interested in undergraduate education in mathematics."" Classroom-tested in a Princeton University honors course, it offers students a unified introduction to advanced calculus. Starting with an abstract treatment of vector spaces and linear transforms, the authors introduce a single basic derivative in an invariant form. All other derivatives - gradient, divergent, curl,

  17. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project FY16 Annual Review

    Science.gov (United States)

    Grindle, Laurie; Hackenberg, Davis

    2016-01-01

    This presentation gives insight into the research activities and efforts being executed in order to integrate unmanned aircraft systems into the national airspace system. This briefing is to inform others of the UAS-NAS FY16 progress and future directions.

  18. Alergia às Proteínas do Leite de Vaca: Uma Nova Era

    Directory of Open Access Journals (Sweden)

    Filipe Benito Garcia

    2016-01-01

    ...currently allowing an unrestricted diet. This therapeutic strategy proves revolutionary in that it makes it possible to modify the natural history of severe and persistent cow's milk protein allergy, with a very positive impact on the quality of life of patients and their families.

  19. 48 CFR 852.236-82 - Payments under fixed-price construction contracts (without NAS).

    Science.gov (United States)

    2010-10-01

    ... manner; or (iv) Failure to comply in good faith with approved subcontracting plans, certifications, or... under other provisions of the contract or in accordance with the general law and regulations regarding... construction contracts (without NAS). 852.236-82 Section 852.236-82 Federal Acquisition Regulations System...

  20. Estabilidad de emulsiones preparadas con proteínas de sueros de soja

    Directory of Open Access Journals (Sweden)

    Jorge Wagner

    2011-12-01

    Full Text Available By cold acetone precipitation, protein samples were isolated from two soy wheys: whey SS, a by-product of soy protein isolate production, and tofu whey ST. From SS, and from the same whey previously freeze-dried and heated (SSLC), the proteins designated PSS and PSSLC, respectively, were obtained; from ST the sample PST was prepared. The aim of the work was to analyze the stability of o/w emulsions prepared with soy whey proteins in comparison with a native soy isolate (ASN). The emulsions were prepared by homogenizing protein dispersions (0.1-1.0% w/v in 10 mM phosphate buffer, pH 7) with sunflower oil (mass fraction Φ = 0.33), using an Ultraturrax T-25. Stability was evaluated by measuring the separated oil, the particle size distribution (by laser diffraction), and the degrees of creaming and coalescence assessed through backscattering profiles. At all concentrations tested, the emulsions prepared with proteins isolated (by cold acetone precipitation) from heat-treated tofu whey (PST) showed stability comparable to that of emulsions prepared with ASN. Lower stability was found for emulsions made with native soy whey proteins (PSS) obtained in the laboratory without heat treatment. The proteins obtained from this whey after freeze-drying and heating (PSSLC) exhibited a better emulsifying capacity. The results showed that soy whey proteins have good emulsifying and stabilizing properties, which depend on the degree of denaturation and glycosylation attained.

  1. UAS-NAS Integrated Human in the Loop: Test Environment Report

    Science.gov (United States)

    Murphy, Jim; Otto, Neil; Jovic, Srba

    2015-01-01

    The desire and ability to fly Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) is of increasing urgency. The application of unmanned aircraft to perform national security, defense, scientific, and emergency management are driving the critical need for less restrictive access by UAS to the NAS. UAS represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the NAS. NASA's UAS Integration in the NAS Project is conducting research in the areas of Separation Assurance/Sense and Avoid Interoperability (SSI), Human Systems Integration (HSI), and Communication to support reducing the barriers of UAS access to the NAS. This research was broken into two research themes namely, UAS Integration and Test Infrastructure. UAS Integration focuses on airspace integration procedures and performance standards to enable UAS integration in the air transportation system, covering Sense and Avoid (SAA) performance standards, command and control performance standards, and human systems integration. The focus of the Test Infrastructure theme was to enable development and validation of airspace integration procedures and performance standards, including the execution of integrated test and evaluation. In support of the integrated test and evaluation efforts, the Project developed an adaptable, scalable, and schedulable relevant test environment incorporating live, virtual, and constructive elements capable of validating concepts and technologies for unmanned aircraft systems to safely operate in the NAS. To accomplish this task, the Project planned to conduct three integrated events: a Human-in-the-Loop simulation and two Flight Test series that integrated key concepts, technologies and/or procedures in a relevant air traffic environment. Each of

  2. Neonatal Abstinence Syndrome (NAS): Transitioning Methadone Treated Infants From An Inpatient to an Outpatient Setting

    Science.gov (United States)

    Backes, Carl H.; Backes, Carl R.; Gardner, Debra; Nankervis, Craig A.; Giannone, Peter J.; Cordero, Leandro

    2013-01-01

    Background Each year in the US approximately 50,000 neonates receive inpatient pharmacotherapy for the treatment of neonatal abstinence syndrome (NAS). Objective To compare the safety and efficacy of a traditional inpatient only approach with a combined inpatient and outpatient methadone treatment program. Design/Methods Retrospective review (2007-9). Infants were born to mothers maintained on methadone or buprenorphine in an antenatal substance abuse program. All infants received methadone for NAS treatment as inpatient. Methadone weaning for the traditional group (75 pts) was inpatient while the combined group (46 pts) was outpatient. Results Infants in the traditional and combined groups were similar in demographics, obstetrical risk factors, birth weight, GA and the incidence of prematurity (34 & 31%). Hospital stay was shorter in the combined than in the traditional group (13 vs 25d; p < 0.01). Although the duration of treatment was longer for infants in the combined group (37 vs 21d, p<0.01), the cumulative methadone dose was similar (3.6 vs 3.1mg/kg, p 0.42). Follow-up: Information was available for 80% of infants in the traditional and 100% of infants in the combined group. All infants in the combined group were seen ≤ 72 hours from hospital discharge. Breast feeding was more common among infants in the combined group (24 vs. 8% p<0.05). Following discharge there were no differences between the two groups in hospital readmissions for NAS. Prematurity (<37w GA) was the only predictor for hospital readmission for NAS in both groups (p 0.02, OR 5). Average hospital cost for each infant in the combined group was $13,817 less than in the traditional group. Conclusions A combined inpatient and outpatient methadone treatment in the management of NAS decreases hospital stay and substantially reduces cost. Additional studies are needed to evaluate the potential long term benefits of the combined approach on infants and their families. PMID:21852772

  3. Meeting of Experts on NASA's Unmanned Aircraft System (UAS) Integration in the National Airspace Systems (NAS) Project

    Science.gov (United States)

    Wolfe, Jean; Bauer, Jeff; Bixby, C.J.; Lauderdale, Todd; Shively, Jay; Griner, James; Hayhurst, Kelly

    2010-01-01

    Topics discussed include: Aeronautics Research Mission Directorate Integrated Systems Research Program (ISRP) and UAS Integration in the NAS Project; UAS Integration into the NAS Project; Separation Assurance and Collision Avoidance; Pilot Aircraft Interface Objectives/Rationale; Communication; Certification; and Integrated Tests and Evaluations.

  4. Scientific and technical support of the operation and development of nuclear technologies by institutes of the NAS of Ukraine

    International Nuclear Information System (INIS)

    Neklyudov, Yi.M.; Volobujev, O.V.

    2011-01-01

    The significant role of the NAS of Ukraine in the development and implementation of innovations in the field of nuclear and radiation technologies, and its significant contribution to the solution of current problems in this field, are shown.

  5. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project - Systems Integration and Operationalization (SIO) Demonstration

    Science.gov (United States)

    Swieringa, Kurt

    2018-01-01

    The UAS-NAS Project hosted a Systems Integration and Operationalization (SIO) Industry Day for the SIO Request for Information (RFI) on November 30, 2017 in San Diego, California. This follow-up presentation is being given to the same group to report the progress the UAS-NAS Project has made on the SIO RFI. The presentation will be virtual, with a teleconference.

  6. An Integrated Gate Turnaround Management Concept Leveraging Big Data Analytics for NAS Performance Improvements

    Science.gov (United States)

    Chung, William W.; Ingram, Carla D.; Ahlquist, Douglas Kurt; Chachad, Girish H.

    2016-01-01

    "Gate Turnaround" plays a key role in the National Air Space (NAS) gate-to-gate performance by receiving aircraft when they reach their destination airport, and delivering aircraft into the NAS upon departing from the gate and subsequent takeoff. The time spent at the gate in meeting the planned departure time is influenced by many factors and often with considerable uncertainties. Uncertainties such as weather, early or late arrivals, disembarking and boarding passengers, unloading/reloading cargo, aircraft logistics/maintenance services and ground handling, traffic in ramp and movement areas for taxi-in and taxi-out, and departure queue management for takeoff are likely encountered on the daily basis. The Integrated Gate Turnaround Management (IGTM) concept is leveraging relevant historical data to support optimization of the gate operations, which include arrival, at the gate, departure based on constraints (e.g., available gates at the arrival, ground crew and equipment for the gate turnaround, and over capacity demand upon departure), and collaborative decision-making. The IGTM concept provides effective information services and decision tools to the stakeholders, such as airline dispatchers, gate agents, airport operators, ramp controllers, and air traffic control (ATC) traffic managers and ground controllers to mitigate uncertainties arising from both nominal and off-nominal airport gate operations. IGTM will provide NAS stakeholders customized decision making tools through a User Interface (UI) by leveraging historical data (Big Data), net-enabled Air Traffic Management (ATM) live data, and analytics according to dependencies among NAS parameters for the stakeholders to manage and optimize the NAS performance in the gate turnaround domain. The application will give stakeholders predictable results based on the past and current NAS performance according to selected decision trees through the UI. The predictable results are generated based on analysis of the

  7. Advanced Virgo

    CERN Multimedia

    Virgo, a first-generation interferometric gravitational wave (GW) detector located at the European Gravitational Observatory (EGO) in Cascina (Pisa, Italy) and constructed by a collaboration of French and Italian institutes (CNRS and INFN), has successfully completed its long-duration data-taking runs. It is now undergoing a fundamental upgrade that exploits available cutting-edge technology to open an exciting new window on the universe, with the first detection of a gravitational wave signal. Advanced Virgo (AdV) is the project to upgrade the Virgo detector to a second-generation instrument. AdV will be able to scan a volume of the Universe 1000 times larger than initial Virgo. AdV will be hosted in the same infrastructure as Virgo. The Advanced Virgo project is funded and at present carried out by a larger collaboration of institutes belonging to CNRS (France), RMKI (Hungary), INFN (Italy), Nikhef (The Netherlands) and the Polish Academy of Sciences (Poland).

  8. Advanced Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Holcomb, Gordon R. [NETL]

    2013-03-11

    The activity reported in this presentation is to provide the mechanical and physical property information needed to allow rational design, development and/or choice of alloys, manufacturing approaches, and environmental exposure and component life models to enable oxy-fuel combustion boilers to operate at Ultra-Supercritical (up to 650 °C and 22-30 MPa) and/or Advanced Ultra-Supercritical conditions (760 °C and 35 MPa).

  9. Ubiquity and diversity of heterotrophic bacterial nasA genes in diverse marine environments.

    Directory of Open Access Journals (Sweden)

    Xuexia Jiang

    Full Text Available Nitrate uptake by heterotrophic bacteria plays an important role in marine N cycling. However, few studies have investigated the diversity of environmental nitrate assimilating bacteria (NAB). In this study, the diversity and biogeographical distribution of NAB in several global oceans, and particularly in the western Pacific marginal seas, were investigated using both cultivation and culture-independent molecular approaches. Phylogenetic analyses based on 16S rRNA and nasA (encoding the large subunit of the assimilatory nitrate reductase) gene sequences indicated that the cultivable NAB in the South China Sea belonged to the α-Proteobacteria, γ-Proteobacteria and CFB (Cytophaga-Flavobacteria-Bacteroides) bacterial groups. In all the environmental samples of the present study, α-Proteobacteria, γ-Proteobacteria and Bacteroidetes were found to be the dominant nasA-harboring bacteria. Almost all of the α-Proteobacteria OTUs were classified into three Roseobacter-like groups (I to III). Clone library analysis revealed previously underestimated nasA diversity; e.g., nasA gene sequences affiliated with β-Proteobacteria, ε-Proteobacteria and Lentisphaerae were observed in the field investigation for the first time, to the best of our knowledge. The geographical and vertical distributions of seawater nasA-harboring bacteria indicated that NAB are highly diverse and ubiquitously distributed in the studied marginal seas and world oceans. Niche adaptation and separation and/or limited dispersal might mediate the NAB composition and community structure in different water bodies. In the shallow-water Kueishantao hydrothermal vent environment, chemolithoautotrophic sulfur-oxidizing bacteria were the primary NAB, indicating a unique nitrate-assimilating community in this extreme environment. In the coastal water of the East China Sea, the relative abundance of Alteromonas and Roseobacter-like nasA gene sequences responded closely to algal blooms, indicating

  10. Influência de Marx nas músicas de John Lennon

    Directory of Open Access Journals (Sweden)

    Roseli Coutinho dos Santos Nunes

    2014-11-01

    Full Text Available Presents the influence of the philosopher Karl Marx on the openly political songs Revolution (1968), Working Class Hero (1970) and Power to the People (1971), whose main creative force in composition and recording was John Lennon, the Beatle most involved with Marxist theory. It also presents the influence of the Marxist way of thinking in many of the Beatles' works: changing the way people think about the world in order to create a better and fairer world and, in the later works, drawing attention to the inequality between social classes.

  11. The Reflection of Quantum Aesthetics in Algis Mickūnas Cosmic Philosophy

    Directory of Open Access Journals (Sweden)

    Auridas Gajauskas

    2011-04-01

    Full Text Available The Quantum Aesthetics phenomenon was formed in Spain at the end of the twentieth century. The paper analyzes this movement in the context of Algis Mickūnas' phenomenological cosmic philosophy. The movement's initiator is the Spanish novelist Gregorio Morales. The study is divided into two parts: the first part presents the aesthetic principles of quantum aesthetics and the relationship between the new aesthetics and theories of quantum mechanics, physics and other sciences. The paper also examines the similarities between quantum aesthetics and New Age movements. The second part presents a cosmic-phenomenological reflection on the quantum theory of beauty. Mickūnas' philosophical position combines the theory of "eternal recurrence", "the bodily nature of consciousness", "the cosmic dance", the theory of "dynamic fields" and a quantum approach to aesthetics and the Universe. Summa summarum, he writes that "the conception of quantum aesthetics is involved in the composition of the rhythmic, cyclical and mood dimensioned and tensed world".

  12. Como tem se dado a atuação do assistente social nas empresas privadas?

    Directory of Open Access Journals (Sweden)

    Stephania Lani de Lacerda Reis Gavioli de Abreu

    2016-05-01

    Full Text Available This article aims to present how social workers have been operating in private companies. The study was conducted as qualitative research, by means of semi-structured interviews with professionals in the field at six private companies. The main results show that the work of social workers in private companies is marked by several antagonisms; nevertheless, it is believed possible to direct their work toward the interests of workers in parallel with capital's interest in profitability, doing so through strategies articulated with the ethical-political project of Social Work.

  13. The temperature dependence of atomic incorporation characteristics in growing GaInNAs films

    International Nuclear Information System (INIS)

    Li, Jingling; Gao, Fangliang; Wen, Lei; Zhou, Shizhong; Zhang, Shuguang; Li, Guoqiang

    2015-01-01

    We have systematically studied the temperature dependence of the incorporation characteristics of nitrogen (N) and indium (In) in growing GaInNAs films. With the implementation of Monte-Carlo simulation, a low N adsorption energy (−0.10 eV) is demonstrated. To understand the atomic incorporation mechanism, the temperature dependence of the interactions between Group-III and Group-V elements is subsequently discussed. We find that the incorporation behavior of In, rather than that of N, is more sensitive to the growth temperature T_g, which can be experimentally verified by exploring the compositional modulation and structural changes of the GaInNAs films by means of high-resolution X-ray diffraction, X-ray photoelectron spectroscopy, scanning electron microscopy, and secondary ion mass spectroscopy.
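
    The temperature sensitivity implied by such a small adsorption energy can be illustrated with a simple Boltzmann factor; the sketch below evaluates exp(-E/kT) for the reported 0.10 eV value at a few assumed growth temperatures. This is a back-of-the-envelope illustration only, not the Monte-Carlo model used in the study, and the temperatures are assumptions.

        import math

        K_B = 8.617e-5  # Boltzmann constant, eV/K

        def boltzmann_factor(energy_ev, temperature_k):
            """Relative weight exp(-E/kT) of a thermally activated event."""
            return math.exp(-energy_ev / (K_B * temperature_k))

        E_ADS_N = 0.10  # magnitude of the N adsorption energy quoted in the abstract (eV)
        for T in (600.0, 700.0, 800.0):  # illustrative growth temperatures in K
            print(f"T = {T:.0f} K: exp(-E/kT) = {boltzmann_factor(E_ADS_N, T):.3f}")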

  14. Euphorbiaceae Juss: espécies ocorrentes nas restingas do Estado do Rio de Janeiro, Brasil

    Directory of Open Access Journals (Sweden)

    Arline Souza de Oliveira

    1989-01-01

    Full Text Available This work lists the species of the family Euphorbiaceae Juss. found in the restingas (sandy coastal plains) of the State of Rio de Janeiro, Brazil. Collections were carried out from 1983 to 1988 in several stretches of the Rio de Janeiro coast, in the different vegetation zones. In addition to the list of 31 species in 16 genera, the biological form (habit) of these taxa is also recorded, for a better understanding of this family in the floristic composition of the restingas.

  15. First-principle natural band alignment of GaN / dilute-As GaNAs alloy

    Directory of Open Access Journals (Sweden)

    Chee-Keong Tan

    2015-01-01

    Full Text Available Density functional theory (DFT) calculations with the local density approximation (LDA) functional are employed to investigate the band alignment of dilute-As GaNAs alloys with respect to GaN. Conduction and valence band positions of the dilute-As GaNAs alloy with respect to GaN on an absolute energy scale are determined from the combination of bulk and surface DFT calculations. The resulting GaN / GaNAs conduction-to-valence band offset ratio is found to be approximately 5:95. Our theoretical finding is in good agreement with experimental observation, indicating that the upward movement of the valence band at low As content is mainly responsible for the drastic reduction of the energy band gap in dilute-As GaNAs. In addition, type-I band alignment of GaN / GaNAs is suggested as a reasonable approach for future device implementation with dilute-As GaNAs quantum wells, and a possible type-II quantum well active region can be formed by using an InGaN / dilute-As GaNAs heterostructure.
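
    The quoted 5:95 ratio is simply a partition of the total band-gap difference between GaN and the dilute-As alloy; written out in LaTeX (a restatement of the abstract's numbers, not an additional result):

        \Delta E_C + \Delta E_V = \Delta E_g \equiv E_g^{\mathrm{GaN}} - E_g^{\mathrm{GaNAs}},
        \qquad \frac{\Delta E_C}{\Delta E_V} \approx \frac{5}{95}
        \;\Rightarrow\; \Delta E_C \approx 0.05\,\Delta E_g, \quad \Delta E_V \approx 0.95\,\Delta E_g.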

  16. Identificación de proteínas de cubierta y de membrana en merozoitos de Plasmodium falciparum

    Directory of Open Access Journals (Sweden)

    Enid Rivadeneira

    1987-06-01

    Full Text Available Proteins that are easily shed from the merozoite (probably coat proteins) and intrinsic membrane proteins were identified by fractionation of endogenously labeled, purified parasites. Continuous labeling throughout the entire cycle ensured identification of the proteins regardless of their time of synthesis. This method allowed membrane proteins to be detected regardless of their susceptibility to enzymatic digestion or exogenous labeling. Four proteins of 100, 75, 50 and 45 kD were identified that are probably constituents of the merozoite coat. In the detergent-soluble membrane fraction, six major proteins of 225, 86, 82, 75, 72 and 40 kD and four minor proteins of 200, 69, 45 and 43 kD were detected. This work is a contribution to the characterization of the surface of the Plasmodium falciparum merozoite.

  17. Advanced calculus

    CERN Document Server

    Friedman, Avner

    2007-01-01

    This rigorous two-part treatment advances from functions of one variable to those of several variables. Intended for students who have already completed a one-year course in elementary calculus, it defers the introduction of functions of several variables for as long as possible, and adds clarity and simplicity by avoiding a mixture of heuristic and rigorous arguments.The first part explores functions of one variable, including numbers and sequences, continuous functions, differentiable functions, integration, and sequences and series of functions. The second part examines functions of several

  18. Advanced calculus

    CERN Document Server

    Fitzpatrick, Patrick M

    2009-01-01

    Advanced Calculus is intended as a text for courses that furnish the backbone of the student's undergraduate education in mathematical analysis. The goal is to rigorously present the fundamental concepts within the context of illuminating examples and stimulating exercises. This book is self-contained and starts with the creation of basic tools using the completeness axiom. The continuity, differentiability, integrability, and power series representation properties of functions of a single variable are established. The next few chapters describe the topological and metric properties of Euclide

  19. Advanced trigonometry

    CERN Document Server

    Durell, C V; Robson, A

    1950-01-01

    This volume will provide a welcome resource for teachers seeking an undergraduate text on advanced trigonometry, when few are readily available. Ideal for self-study, this text offers a clear, logical presentation of topics and an extensive selection of problems with answers. Contents include the properties of the triangle and the quadrilateral; equations, sub-multiple angles, and inverse functions; hyperbolic, logarithmic, and exponential functions; and expansions in power-series. Further topics encompass the special hyperbolic functions; projection and finite series; complex numbers; de Moiv

  20. Genome-wide identification, classification and expression profiling of nicotianamine synthase (NAS) gene family in maize

    OpenAIRE

    Zhou, Xiaojin; Li, Suzhen; Zhao, Qianqian; Liu, Xiaoqing; Zhang, Shaojun; Sun, Cheng; Fan, Yunliu; Zhang, Chunyi; Chen, Rumei

    2013-01-01

    Background Nicotianamine (NA), a ubiquitous molecule in plants, is an important metal ion chelator and the main precursor for phytosiderophores biosynthesis. Considerable progress has been achieved in cloning and characterizing the functions of nicotianamine synthase (NAS) in plants including barley, Arabidopsis and rice. Maize is not only an important cereal crop, but also a model plant for genetics and evolutionary study. The genome sequencing of maize was completed, and many gene families ...

  1. A regra de ouro e a ética nas organizações

    Directory of Open Access Journals (Sweden)

    Hermano Roberto Thiry-Cherques

    Full Text Available This article examines the principle of the golden rule and questions its wide application in organizations. The text summarizes the rule's trajectory in the history of philosophical thought and, drawing on Kant's critique, presents arguments that expose its logical fragility.

  2. Applications Performance Under MPL and MPI on NAS IBM SP2

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    On July 5, 1994, an IBM Scalable POWERparallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of a RISC 6000/590 workstation module with a 66.5 MHz clock that can perform four floating-point operations per clock, for a peak performance of 266 Mflop/s. By the end of 1994, the 64-node IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS 6000/590 will help application scientists in porting, optimizing, and tuning codes from other machines such as the CRAY C90 and the Paragon to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating-point units, and data cache optimization on the RS 6000/590 are illustrated, with examples giving the performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM runs at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.
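
    As a hedged sketch of the point-to-point message passing involved when moving NX/MPL codes to MPI, the snippet below exchanges an array between two ranks through the standard MPI send/receive interface (shown here with the mpi4py Python bindings rather than the Fortran/C interfaces discussed in the abstract).

        # Run with, e.g.: mpiexec -n 2 python exchange.py  (assumes mpi4py and NumPy are installed)
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        n = 1 << 20  # about one million doubles (~8 MB) per message

        if rank == 0:
            buf = np.full(n, 1.0)
            comm.Send([buf, MPI.DOUBLE], dest=1, tag=0)    # hand the array to rank 1
            comm.Recv([buf, MPI.DOUBLE], source=1, tag=1)  # receive the processed reply
            print("rank 0 got back first element =", buf[0])
        elif rank == 1:
            buf = np.empty(n)
            comm.Recv([buf, MPI.DOUBLE], source=0, tag=0)
            buf *= 2.0                                     # trivial stand-in for local work
            comm.Send([buf, MPI.DOUBLE], dest=0, tag=1)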

  3. Functional Requirements Document for HALE UAS Operations in the NAS: Step 1. Version 3

    Science.gov (United States)

    2006-01-01

    The purpose of this Functional Requirements Document (FRD) is to compile the functional requirements needed to achieve the Access 5 Vision of "operating High Altitude, Long Endurance (HALE) Unmanned Aircraft Systems (UAS) routinely, safely, and reliably in the national airspace system (NAS)" for Step 1. These functional requirements could support the development of a minimum set of policies, procedures and standards by the Federal Aviation Administration (FAA) and various standards organizations. It is envisioned that this comprehensive body of work will enable the FAA to establish and approve regulations to govern safe operation of UAS in the NAS on a routine or daily "file and fly" basis. The approach used to derive the functional requirements found within this FRD was to decompose the operational requirements and objectives identified within the Access 5 Concept of Operations (CONOPS) into the functions needed to routinely and safely operate a HALE UAS in the NAS. As a result, four major functional areas evolved to enable routine and safe UAS operations on an on-demand basis in the NAS. These four major functions are: Aviate, Navigate, Communicate, and Avoid Hazards. All of the functional requirements within this document are directly traceable to one of these four major functions. Some, however, are traceable to several, or even all, of the four; these cross-cutting functional requirements support the "Command/Control" function as well as the "Manage Contingencies" function. The requirements associated with these high-level functions and all of their supporting low-level functions are addressed in subsequent sections of this document.

  4. Flujo y concentración de proteínas en saliva total humana

    Directory of Open Access Journals (Sweden)

    BANDERAS-TARABAY JOSÉ ANTONIO

    1997-01-01

    Full Text Available Objective. To determine average salivary flow and total protein concentration in a young population of the State of Mexico. Material and methods. 120 subjects were selected, from whom unstimulated and stimulated whole human saliva (STH) was collected and analyzed by gravimetry and spectrophotometry (LV/LU); measures of central tendency and dispersion were calculated, and these data were then correlated with the CPOD and CPITN indices. Results. The subjects studied showed an average salivary flow (ml/min ± SD) of 0.397 ± 0.26 for unstimulated saliva and 0.973 ± 0.53 for stimulated saliva. The average protein concentration (mg/ml ± SD) was 1.374 ± 0.45 in unstimulated and 1.526 ± 0.44 in stimulated saliva. Women showed lower salivary flow and higher protein concentration. No correlations were observed between flow or total protein concentration and the CPOD and CPITN indices; however, correlations were found with other variables. Conclusions. These findings could be associated with nutritional status, genetic characteristics and oral health levels in our population. The present study represents the initial phase of building a sialochemistry database, whose goal will be to identify parameters that indicate the risk of systemic or oral diseases.

  5. Investigations of the Optical Properties of GaNAs Alloys by First-Principle.

    Science.gov (United States)

    Borovac, Damir; Tan, Chee-Keong; Tansu, Nelson

    2017-12-11

    We present a Density Functional Theory (DFT) analysis of the optical properties of dilute-As GaN1-xAsx alloys with arsenic (As) content ranging from 0% up to 12.5%. The real and imaginary parts of the dielectric function are investigated, and the results are compared to experimental and theoretical values for GaN. The analysis extends to the complex refractive index and the normal-incidence reflectivity. The refractive index difference between GaN and GaNAs alloys can be engineered to be up to ~0.35 in the visible regime by inserting relatively low amounts of As into the GaN system. Thus, the analysis elucidates the birefringence of the dilute-As GaNAs alloys, and a comparison to other experimentally characterized III-nitride systems is drawn. Our findings indicate the potential of GaNAs alloys for III-nitride based waveguide and photonic circuit design applications.
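
    The quantities named here are connected by standard optics relations (textbook identities, not results specific to this paper): for a complex dielectric function \varepsilon = \varepsilon_1 + i\varepsilon_2, the complex refractive index n + i\kappa and the normal-incidence reflectivity R follow as

        n = \sqrt{\tfrac{1}{2}\left(\sqrt{\varepsilon_1^2+\varepsilon_2^2}+\varepsilon_1\right)}, \qquad
        \kappa = \sqrt{\tfrac{1}{2}\left(\sqrt{\varepsilon_1^2+\varepsilon_2^2}-\varepsilon_1\right)}, \qquad
        R = \frac{(n-1)^2+\kappa^2}{(n+1)^2+\kappa^2}.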

  6. A participação dos pais nas pesquisas sobre o bullying escolar

    Directory of Open Access Journals (Sweden)

    Juliane Callegaro Borsa

    Full Text Available Bullying is a common problem in peer interaction and can lead to different harms throughout the development of both victimized and aggressive children. Recent research indicates a high frequency of bullying in Brazilian schools, yet studies that approach this phenomenon from a multifactorial perspective are still scarce. This article aims to present the concept of bullying and to show the importance of considering variables of the family context for understanding it. It highlights the need to include children's parents as participants in empirical research on school bullying and the importance of their participation both in the assessment and in the prevention of this problem. Finally, it discusses the inclusion of parents in intervention strategies against bullying, with a view to reducing the risk factors present in the family environment and their harm to children's socio-emotional development.

  7. GESTÃO DO CONHECIMENTO NAS ORGANIZAÇÕES OU DO DESCONHECIMENTO DA REALIDADE ORGANIZACIONAL?

    Directory of Open Access Journals (Sweden)

    Fladimir F. dos Santos

    2005-12-01

    Full Text Available This article discusses the validity of the purposes of knowledge management as a tool for organizational intervention, describing some paradoxes that exist between its theory and its practice in organizations. It is argued that its original purposes of creating, disseminating and incorporating new knowledge in the organization are giving way to an approach that does not match the reality of organizations. The evolution of management theories is also reviewed to elucidate how this new management paradigm is being approached in companies. It is proposed that this amounts to a reification of the Taylorist maxim, whose original purposes are turning into yet another instrument of human manipulation in companies. Finally, in place of the prevailing managerial model, a values-based management process is proposed.

  8. PROPRIEDADES FUNCIONAIS DAS PROTEÍNAS DE AMÊNDOAS DA MUNGUBA (Pachira aquatica Aubl.

    Directory of Open Access Journals (Sweden)

    BERNADETE DE LOURDES DE ARAÚJO SILVA

    2015-03-01

    Full Text Available ABSTRACT The seed of munguba (Pachira aquatica Aubl.) contains kernels with an excellent oil content and a significant percentage of protein. The aim was to determine some functional properties of the proteins of munguba kernels in order to establish their use in the food industry. The lipid content was 46.62%, the protein content 13.75%, and, in the form of press cake, the material contained 28.27% protein. Two protein isolates were obtained, IP 2.0 and IP 10.0, resulting from two pH conditions (2.0 and 10.0). In obtaining the protein isolates, the extracted protein yields were 38.52% for IP 2.0 and 82.06% for IP 10.0. The protein yields recovered by isoelectric precipitation, at pH 5.0, were 23.35% for IP 2.0 and 70.94% for IP 10.0. The functional properties showed minimum solubility at pH 5.0, the isoelectric point (pI), with higher solubility at pH values below and above the pI. The best water and oil absorption capacities were exhibited by IP 10.0. The emulsifying properties were pH-dependent for both isolates, with IP 10.0 giving the best results. The functional properties studied allow the protein isolates to be used in food products that require high solubility, such as bakery products, pasta in general, dehydrated soups and sauces, in products that demand good oil absorption, such as meat analogues, and in products that require emulsifying power.

  9. Advanced Pacemaker

    Science.gov (United States)

    1990-01-01

    Synchrony, developed by St. Jude Medical's Cardiac Rhythm Management Division (formerly known as Pacesetter Systems, Inc.), is an advanced state-of-the-art implantable pacemaker that closely matches the natural rhythm of the heart. The companion element of the Synchrony Pacemaker System is the Programmer Analyzer APS-II, which allows a doctor to reprogram and fine-tune the pacemaker to each user's special requirements without surgery. The two-way communications capability that allows the physician to instruct and query the pacemaker is accomplished by bidirectional telemetry. The APS-II features 28 pacing functions and thousands of programming combinations to accommodate diverse lifestyles. The microprocessor unit also records and stores pertinent patient data for up to a year.

  10. Advances in lung ultrasound.

    Science.gov (United States)

    Francisco, Miguel José; Rahal, Antonio; Vieira, Fabio Augusto Cardillo; Silva, Paulo Savoia Dias da; Funari, Marcelo Buarque de Gusmão

    2016-01-01

    Ultrasound examination of the chest has advanced in recent decades. This imaging modality is currently used to diagnose several pathological conditions and provides qualitative and quantitative information. Acoustic barriers represented by the aerated lungs and the bony framework of the chest generate well-described sonographic artifacts that can be used as diagnostic aids. The normal pleural line and A, B, C, E and Z lines (also known as false B lines) are artifacts with specific characteristics. Lung consolidation and pneumothorax sonographic patterns are also well established. Some scanning protocols have been used in patient management. The Blue, FALLS and C.A.U.S.E. protocols are examples of algorithms using artifact combinations to achieve accurate diagnoses. Combined chest ultrasonography and radiography are often sufficient to diagnose and manage lung and chest wall conditions. Chest ultrasonography is a highly valuable diagnostic tool for radiologists, emergency and intensive care physicians.

  11. Effect of antimony on the deep-level traps in GaInNAsSb thin films

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Muhammad Monirul, E-mail: islam.monir.ke@u.tsukuba.ac.jp; Miyashita, Naoya; Ahsan, Nazmul; Okada, Yoshitaka [Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, 4-6-1 Komaba, Meguro ku, Tokyo 153-8904 (Japan); Sakurai, Takeaki; Akimoto, Katsuhiro [Institute of Applied Physics, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573 (Japan)

    2014-09-15

    Admittance spectroscopy has been performed to investigate the effect of antimony (Sb) on GaInNAs in relation to the deep-level defects in this material. Two electron traps, E1 and E2, at energy levels of 0.12 and 0.41 eV below the conduction band (E_C), respectively, were found in undoped GaInNAs. Bias-voltage dependent admittance confirmed that E1 is an interface-type defect spatially localized at the GaInNAs/GaAs interface, while E2 is a bulk-type defect located around mid-gap of the GaInNAs layer. Introduction of Sb improved the material quality, which was evident from the reduction of both the interface- and bulk-type defects.
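
    Trap activation energies such as the 0.12 and 0.41 eV values quoted here are conventionally extracted in admittance spectroscopy from the thermally activated electron emission rate of the trap; in the standard textbook form (not specific to this paper, expressed in LaTeX),

        e_n(T) = \sigma_n \,\langle v_{\mathrm{th}}\rangle\, N_C \exp\!\left(-\frac{E_a}{k_B T}\right) \;\propto\; T^2 \exp\!\left(-\frac{E_a}{k_B T}\right),

    with the step in the capacitance (or the peak in G/\omega) appearing near the temperature at which e_n matches the angular measurement frequency.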

  12. 78 FR 12951 - TRICARE; Elimination of the Non-Availability Statement (NAS) Requirement for Non-Emergency...

    Science.gov (United States)

    2013-02-26

    ... an annual effect of $100 million or more on the national economy or which would have other... maternity services, the ASD(HA) may require an NAS prior to TRICARE cost-sharing for additional services...

  13. [Darius Staliūnas. Making Russians : meaning and practice of russification in Lithuania and Belarus after 1863

    Index Scriptorium Estoniae

    Woodworth, Bradley D., 1963-

    2011-01-01

    Review: Darius Staliūnas. Making Russians. Meaning and practice of russification in Lithuania and Belarus after 1863. On the boundary of two worlds: identity, freedom, and moral imagination in the Baltica, 11. (Amsterdam : Rodopi, 2007)

  14. High-Performance All-Solid-State Na-S Battery Enabled by Casting-Annealing Technology.

    Science.gov (United States)

    Fan, Xiulin; Yue, Jie; Han, Fudong; Chen, Ji; Deng, Tao; Zhou, Xiuquan; Hou, Singyuk; Wang, Chunsheng

    2018-04-24

    Room-temperature all-solid-state Na-S batteries (ASNSBs) using sulfide solid electrolytes are a promising next-generation battery technology due to the high energy, enhanced safety, and earth-abundant resources of both sodium and sulfur. Currently, sulfide-electrolyte ASNSBs are fabricated by a simple cold-pressing process that leaves high residual stress. Even worse, the large volume change of S/Na2S during charge/discharge cycles induces additional stress, seriously weakening the poorly contacted interfaces among the solid electrolyte, active materials, and the electron-conductive agent that are formed in the cold-pressing process. The high and continuously increasing interface resistance has hindered its practical application. Herein, we significantly reduce the interface resistance and eliminate the residual stress in Na2S cathodes by fabricating Na2S-Na3PS4-CMK-3 nanocomposites using melt-casting followed by a stress-release annealing-precipitation process. The casting-annealing process guarantees close contact between the Na3PS4 solid electrolyte and the CMK-3 mesoporous carbon in a mixed ionic/electronic conductive matrix, while the Na2S active species precipitated in situ from the solid electrolyte during annealing guarantees interfacial contact among these three subcomponents without residual stress, which greatly reduces the interfacial resistance and enhances the electrochemical performance. The in situ synthesized Na2S-Na3PS4-CMK-3 composite cathode delivers a stable and highly reversible capacity of 810 mAh/g at 50 mA/g for 50 cycles at 60 °C. The present casting-annealing strategy should provide opportunities for the advancement of mechanically robust and high-performance next-generation ASNSBs.
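
    For context on the 810 mAh/g figure, the theoretical capacity of an electrode material follows from Faraday's law; since the abstract does not state whether the value is normalized per gram of sulfur or of Na2S, both reference values are shown below (in LaTeX) as a hedged comparison, assuming n = 2 electrons and F = 96485 C/mol:

        Q_{\mathrm{th}} = \frac{nF}{3.6\,M}\ \mathrm{mAh\,g^{-1}}
        \;\Rightarrow\;
        Q_{\mathrm{th}}(\mathrm{S}) = \frac{2 \times 96485}{3.6 \times 32.06} \approx 1672\ \mathrm{mAh\,g^{-1}}, \qquad
        Q_{\mathrm{th}}(\mathrm{Na_2S}) = \frac{2 \times 96485}{3.6 \times 78.04} \approx 687\ \mathrm{mAh\,g^{-1}}.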

  15. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project KDP-C Review

    Science.gov (United States)

    Grindle, Laurie; Sakahara, Robert; Hackenberg, Davis; Johnson, William

    2017-01-01

    The topics discussed are the UAS-NAS project life-cycle and ARMD thrust flow down, as well as the UAS environments and how we operate in those environments. NASA's Armstrong Flight Research Center at Edwards, CA, is leading a project designed to help integrate unmanned air vehicles into the world around us. The Unmanned Aircraft Systems Integration in the National Airspace System project, or UAS in the NAS, will contribute capabilities designed to reduce technical barriers related to safety and operational challenges associated with enabling routine UAS access to the NAS. The project falls under the Integrated Systems Research Program office managed at NASA Headquarters by the agency's Aeronautics Research Mission Directorate. NASA's four aeronautics research centers - Armstrong, Ames Research Center, Langley Research Center, and Glenn Research Center - are part of the technology development project. With the use and diversity of unmanned aircraft growing rapidly, new uses for these vehicles are constantly being considered. Unmanned aircraft promise new ways of increasing efficiency, reducing costs, enhancing safety and saving lives. [Figure: unmanned aircraft systems such as NASA's Global Hawks and the Predator B named Ikhana, along with numerous other unmanned aircraft systems large and small, are the prime focus of the UAS in the NAS effort to integrate them into the national airspace. Credit: NASA.] The UAS in the NAS project envisions performance-based routine access to all segments of the national airspace for all unmanned aircraft system classes, once all safety-related and technical barriers are overcome. The project will provide critical data to such key stakeholders and customers as the Federal Aviation Administration and RTCA Special Committee 203 (formerly the Radio Technical Commission for Aeronautics) by conducting integrated, relevant system-level tests to adequately address

  16. Japānas populārās kultūras ietekme uz jauniešiem

    OpenAIRE

    Leščenko, Jekaterina

    2016-01-01

    The title of this work is "Japānas populārās kultūras ietekme uz jauniešiem" ("The influence of Japanese popular culture on young people"). It is known that in recent years Japanese popular culture has been gaining more and more popularity all over the world. In almost every country one can find something related to Japanese popular culture. Music, dramas, anime and manga are common and well-known objects of Japanese popular culture. Since Japanese popular culture is so widespread, there must be some reason for its popularity. First of all, Japanese popular culture is completely dif...

  17. Frequency of neonatal abstinence syndrome (NAS) and type of the narcotic substance in neonates born from drug-addicted mothers

    Directory of Open Access Journals (Sweden)

    Fatemeh Nayeri

    2015-02-01

    Full Text Available Abstract Background and objective: NAS is a combination of signs and symptoms that, due to physical and mental dependency, develops in neonates born to drug-addicted mothers. The onset of NAS varies with the type, amount, frequency and duration of substance use. Because of the diverse and unclear pattern of substance abuse among Iranian addicted pregnant women compared with western countries, this multi-center study was designed to evaluate NAS in neonates born to drug-addicted mothers. Material and method: A cross-sectional study was carried out on newborns of narcotic-addicted mothers during the first six months of 2008. The newborns' status and clinical signs were checked by physical examination and scored with the Finnegan scoring system. Results: In this study 100 neonates born to narcotic-addicted mothers were examined; the most commonly used narcotic was crack (36%). 60% of neonates showed signs of NAS. The most prevalent signs of NAS were increased muscle tone (60.7%), irritability (59.6%) and an increased Moro reflex (51.8%). Neonates born to crack abusers, in comparison with other drugs, were significantly at risk of NAS (100% vs. 87%, p

  18. Mantle Convection on Modern Supercomputers

    Science.gov (United States)

    Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.

    2015-12-01

    Mantle convection is the cause of plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic of mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next-generation large-scale architectures is handled successfully only in an interdisciplinary context. A new priority program - named SPPEXA - by the German Research Foundation (DFG) addresses this issue, and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report from the TERRA-NEO project, which is part of the high-visibility SPPEXA program, and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next-generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small-scale processes on global mantle flow.
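
    Under the widely used Boussinesq approximation for creeping mantle flow (an assumption of this sketch, not necessarily the exact TERRA-NEO formulation), the conservation laws mentioned above take the form (in LaTeX)

        \nabla \cdot \mathbf{u} = 0, \qquad
        -\nabla p + \nabla \cdot \left[\eta \left(\nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}}\right)\right]
        + \rho_0 \alpha \,(T - T_0)\, g\, \hat{\mathbf{e}}_z = 0, \qquad
        \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa \nabla^2 T + \frac{H}{\rho_0 c_p},

    i.e. incompressible Stokes flow driven by thermal buoyancy, coupled to an advection-diffusion equation for temperature.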

  19. Supercomputer requirements for theoretical chemistry

    International Nuclear Information System (INIS)

    Walker, R.B.; Hay, P.J.; Galbraith, H.W.

    1980-01-01

    Many problems important to the theoretical chemist would, if implemented in their full complexity, strain the capabilities of today's most powerful computers. Several such problems are now being implemented on the CRAY-1 computer at Los Alamos. Examples of these problems are taken from the fields of molecular electronic structure calculations, quantum reactive scattering calculations, and quantum optics. 12 figures

  20. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-01-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  1. [Teacher enhancement at Supercomputing '96

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-13

    The SC`96 Education Program provided a three-day professional development experience for middle and high school science, mathematics, and computer technology teachers. The program theme was Computers at Work in the Classroom, and a majority of the sessions were presented by classroom teachers who have had several years' experience in using these technologies with their students. The teachers who attended the program were introduced to classroom applications of computing and networking technologies and were provided, to the greatest extent possible, with lesson plans, sample problems, and other resources that could immediately be used in their own classrooms. The attached At a Glance Schedule and Session Abstracts describe the three-day SC`96 Education Program in detail. Also included are the SC`96 Education Program evaluation report and the financial report.

  2. UAS Integration Into the NAS: An Examination of Baseline Compliance in the Current Airspace System

    Science.gov (United States)

    Fern, Lisa; Kenny, Caitlin A.; Shively, Robert J.; Johnson, Walter

    2012-01-01

    As a result of the FAA Modernization and Reform Act of 2012, Unmanned Aerial Systems (UAS) are expected to be integrated into the National Airspace System (NAS) by 2015. Several human factors challenges need to be addressed before UAS can safely and routinely fly in the NAS with manned aircraft. Perhaps the most significant challenge is for the UAS to be non-disruptive to the air traffic management system. Another human factors challenge is how to provide UAS pilots with intuitive traffic information in order to support situation awareness (SA) of their airspace environment as well as a see-and-avoid capability comparable to manned aircraft so that a UAS pilot could safely maneuver the aircraft to maintain separation and collision avoidance if necessary. A simulation experiment was conducted to examine baseline compliance of UAS operations in the current airspace system. Researchers also examined the effects of introducing a Cockpit Situation Display (CSD) into a UAS Ground Control Station (GCS) on UAS pilot performance, workload and situation awareness while flying in a positively controlled sector. Pilots were tasked with conducting a highway patrol police mission with a Medium Altitude Long Endurance (MALE) UAS in L.A. Center airspace with two mission objectives: 1) to reroute the UAS when issued new instructions from their commander, and 2) to communicate with Air Traffic Control (ATC) to negotiate flight plan changes and respond to vectoring and altitude change instructions. Objective aircraft separation data, workload ratings, SA data, and subjective ratings regarding UAS operations in the NAS were collected. Results indicate that UAS pilots were able to comply appropriately with ATC instructions. In addition, the introduction of the CSD improved pilot SA and reduced workload associated with UAS and ATC interactions.

  3. The CAS-NAS forum for new leaders in space science

    Science.gov (United States)

    Smith, David H.

    The space science community is thoroughly international, with numerous nations now capable of launching scientific payloads into space either independently or in concert with others. As such, it is important for national space-science advisory groups to engage with like-minded groups in other spacefaring nations. The Space Studies Board of the US National Academy of Sciences' (NAS') National Research Council has provided scientific and technical advice to NASA for more than 50 years. Over this period, the Board has developed important multilateral and bilateral partnerships with space scientists around the world. The primary multilateral partner is COSPAR, for which the Board serves as the US national committee. The Board's primary bilateral relationship is with the European Science Foundation’s European Space Science Committee. Burgeoning Chinese space activities have resulted in several attempts in the past decade to open a dialogue between the Board and space scientists in China. On each occasion, the external political environment was not conducive to success. The most recent efforts to engage the Chinese space researchers began in 2011 and have proved particularly successful. Although NASA is currently prohibited from engaging in bilateral activities with China, the Board has established a fruitful dialogue with its counterpart in the Chinese Academy of Sciences (CAS). A joint NAS-CAS activity, the Forum for New Leaders in Space Science, has been established to provide opportunities for a highly select group of young space scientists from China and the United States to discuss their research activities in an intimate and collegial environment at meetings to be held in both nations. The presentation will describe the current state of US-China space relations, discuss the goals of the joint NAS-CAS undertaking and report on the activities at the May, 2014, Forum in Beijing and the planning for the November, 2014, Forum in Irvine, California.

  4. Lipoproteínas remanentes aterogénicas en humanos

    Directory of Open Access Journals (Sweden)

    Regina Wikinski

    2010-08-01

    Remnant lipoproteins (RLPs) are the product of the lipolysis of the triglycerides carried by very-low-density lipoproteins (VLDL) of hepatic and intestinal origin and by intestinal chylomicrons. This lipolysis is catalyzed by lipoprotein lipase and proceeds in successive steps, so the products are heterogeneous. Their fasting plasma concentration is small in normolipidemic patients and increases in the postprandial state. Genetic alterations in subtypes of their Apo-E component markedly increase their plasma concentration and produce the dysbetalipoproteinemia phenotype. They are considered atherogenic because they injure the endothelium, undergo oxidative stress, are taken up by macrophages in the vascular subendothelium and generate the foam cells that are precursors of atheromas. Their metabolic origin, as products of several types of lipoproteins, explains their heterogeneous structure, their variable plasma concentrations and the methodological difficulties that hinder their inclusion in the lipoprotein profile as part of epidemiological studies. The latest advances in metabolic studies and an updated view of their clinical role justify a review of current knowledge.

  5. CLUSIACEAE LINDL. E HYPERICACEAE JUSS. NAS RESTINGAS DO ESTADO DO PARÁ, AMAZÔNIA ORIENTAL, BRASIL

    Directory of Open Access Journals (Sweden)

    Thiago Teixeira de Oliveira

    2015-12-01

    The study aimed to provide a floristic-taxonomic treatment of Clusiaceae and Hypericaceae for the restingas (coastal sand-plain vegetation) of the State of Pará. The material was obtained from the collections of the herbaria of the Museu Paraense Emílio Goeldi (MG) and Embrapa Amazônia Oriental (IAN), and from collections made at Crispim beach, Marapanim-PA. The species descriptions were based on morphological characters and their respective variations for this flora, and an identification key was prepared. The families are represented by four taxa: Clusiaceae comprises Clusia fockeana Miq., C. hoffmannseggiana Schltdl. and C. panapanari (Aubl.) Choisy, and Hypericaceae only Vismia guianensis (Aubl.) Choisy. C. panapanari is restricted to the restinga forest formation, while C. hoffmannseggiana and V. guianensis show a broader distribution in the restingas of Pará. The survey of the herbarium collections showed that collections of these families in the restingas of Pará are still scarce, and further collecting effort may provide more information on flowering and fruiting periods, as well as probable new records for the study area. Keywords: Cebola brava, coast of Pará, taxonomy. DOI: http://dx.doi.org/10.18561/2179-5746/biotaamazonia.v5n4p15-21

  6. QNAP 1263U Network Attached Storage (NAS)/ Storage Area Network (SAN) Device Users Guide

    Science.gov (United States)

    2016-11-01

    Only front-matter and table-of-contents fragments of this report are available in the record: the guide covers the Server Message Block protocol and newer standards such as the Internet Small Computer Systems Interface (iSCSI), the role the protocol differences play, mapping the network drive under Windows 7 and Windows 10, connecting to the iSCSI target on the NAS, and adding a new IQN to the iSCSI ACL.

  7. O Impacto do project finance nas empresas portuguesas no setor têxtil

    OpenAIRE

    Ribeiro, Sónia Patrícia dos Santos

    2012-01-01

    Dissertation for the degree of Master in Accounting and Finance. Supervisor: Mestre Adalmiro Álvaro Malheiro de Castro Andrade Pereira. This dissertation, developed within the Master's programme in Accounting and Finance, analyses the impact of Project Finance on Portuguese companies in the textile sector. Project Finance is an innovative form of project financing, widely used in the United States and in Europe, which applies essentially to projects of large sc...

  8. A ciência nas utopias de Campanella, Bacon, Comenius, e Glanvill

    Directory of Open Access Journals (Sweden)

    Bernardo Jefferson de Oliveira

    2002-12-01

    This article evaluates the role that science and technology play in the societies described by early modern utopias, making a comparative analysis of Tommaso Campanella's City of the Sun, Francis Bacon's New Atlantis, Jan Amos Comenius' Panorthosia, and Joseph Glanvill's The summe of my lord Bacon's New Atlantis.

  9. Humor nas propagandas televisivas: um olhar qualitativo sobre as percepções dos consumidores

    OpenAIRE

    Silva, Hélcia Daniel da

    2010-01-01

    Advertising plays an important role for society, the economy and companies' markets. Its value in marketing is fundamental, as it is considered the main tool for exposure to the public, capable of changing behaviours and attitudes, charming and persuading audiences with varied appeals that make a difference in the advertising context. Humour is an appeal considered irreverent and successful when well used in advertising pieces. It is a resource that can facilitate interest...

  10. Photoluminescence and magnetophotoluminescence studies in GaInNAs/GaAs quantum wells

    Science.gov (United States)

    Segura, J.; Garro, N.; Cantarero, A.; Miguel-Sánchez, J.; Guzmán, A.; Hierro, A.

    2007-04-01

    We investigate the effects of electron and hole localization in the emission of a GaInNAs/GaAs single quantum well at low temperatures. Photoluminescence measurements varying the excitation density and under magnetic fields up to 14 T have been carried out. The results indicate that electrons are strongly localized in these systems due to small fluctuations in the nitrogen content of the quaternary alloy. The low linear diamagnetic shift of the emission points out the weakness of the Coulomb correlation between electrons and holes and suggests an additional partial localization of the holes.

  11. Arte ou artefato? Agência e significado nas artes indígenas

    Directory of Open Access Journals (Sweden)

    Els Lagrou

    2016-11-01

    Another interesting aspect that stands out in the two contributions is the interrelation between the anthropology of art and the anthropology of things or objects. Thinking about artistic practices and objects from an anthropological perspective means uncovering the social relations and intentionalities condensed in them or transmitted by them, a point that, coincidentally, is also present in other sections of this issue of Proa, including the Gallery.

  12. A QUESTÃO DA MOBILIDADE URBANA NAS METRÓPOLES BRASILEIRAS

    Directory of Open Access Journals (Sweden)

    Valéria Pero

    2015-12-01

    ABSTRACT: Commuting time from home to work has risen substantially in Brazilian metropolitan regions over the last decade. This phenomenon has strong implications for individual well-being, but its consequences are not evenly distributed across the population. This paper aims to contribute to the debate on urban mobility in Brazilian metropolises by analysing the evolution of commuting time between 1992 and 2013 and its differences according to worker characteristics, such as sex, colour and per capita income, and to characteristics of the job. The increase in average commuting time occurred from 2003 onwards, making it a particularly important issue for Brazilian metropolises in the third millennium. The workers with the longest average commuting times live in the metropolitan regions of Rio de Janeiro and São Paulo. However, the highest growth rates occurred in the metropolitan areas of Pará, Salvador and Recife, suggesting the need for better targeting and planning of public policies on urban mobility. Considering socioeconomic differences, the poorest and the richest (the extremes of the income distribution) tend to have shorter commuting times than workers from middle-income families. This pattern holds over time, with an increase in average commuting time among the poorest, revealing one face of inequality. The largest increase, however, occurred among the richest, placing the question of urban mobility beyond problems of social exclusion.

  13. Radiation and nuclear technologies in the Institute for Nuclear Research NAS of Ukraine

    International Nuclear Information System (INIS)

    Vishnevs'kij, Yi.M.; Gajdar, G.P.; Kovalenko, O.V.; Kovalyins'ka, T.V.; Kolomyijets', M.F.; Lips'ka, A.Yi.; Litovchenko, P.G.; Sakhno, V.Yi.; Shevel', V.M.

    2014-01-01

    The monograph describes some of the important developments in radiation and nuclear technology made at the INR NAS of Ukraine. The first section describes radiation technologies for producing new materials and providing services using electrons with energies up to 5 MeV and bremsstrahlung X-rays, and presents original technologies using ion beams of low and very low energies. The second section covers nuclear technologies in which ions, neutrons and other high-energy particles are used to modify the structure of matter and of nuclei, in particular to produce radioactive isotopes for industrial and medical supplies and the devices based on them.

  14. Independent determination of In and N concentrations in GaInNAs alloys

    International Nuclear Information System (INIS)

    Lu, W; Lim, J J; Bull, S; Andrianov, A V; Larkins, E C; Staddon, C; Foxon, C T; Sadeghi, M; Wang, S M; Larsson, A

    2009-01-01

    High-resolution x-ray diffraction (HRXRD) and photoreflectance (PR) spectroscopy were used to independently determine the In and N concentrations in GaInNAs alloys grown by solid-source molecular beam epitaxy (SSMBE). The lattice constant and bandgap energy can be expressed as two independent equations in terms of the In and N concentrations, respectively. The HRXRD measurement provided the lattice constant and the PR measurement extracted the bandgap energy. By simultaneously solving these two equations, we have determined the In and N concentrations with the error as small as 0.001
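
    The two equations themselves are not reproduced in the record. The sketch below shows the kind of simultaneous solution described, assuming Vegard's law for the relaxed lattice constant and a simple band-anticrossing expression for the bandgap; all material parameters (binary lattice constants, the GaInAs gap interpolation, the nitrogen level E_N and coupling constant V) and the example inputs are illustrative placeholders, not values from the paper.

      import numpy as np
      from scipy.optimize import fsolve

      # Placeholder material parameters (illustrative, not from the paper)
      A_GAAS, A_INAS, A_GAN, A_INN = 5.6533, 6.0583, 4.50, 4.98   # angstrom
      E_N, V_NC = 1.65, 2.7                                       # eV (BAC model)

      def lattice_constant(x, y):
          """Vegard's law for Ga(1-x)In(x)N(y)As(1-y)."""
          return ((1 - x) * (1 - y) * A_GAAS + x * (1 - y) * A_INAS
                  + (1 - x) * y * A_GAN + x * y * A_INN)

      def bandgap(x, y):
          """Band-anticrossing estimate: the N level E_N repels the GaInAs gap."""
          e_m = 1.424 - 1.50 * x + 0.40 * x**2      # GaInAs gap (rough fit, eV)
          return 0.5 * ((e_m + E_N) - np.sqrt((e_m - E_N)**2 + 4 * V_NC**2 * y))

      def solve_composition(a_meas, eg_meas, guess=(0.3, 0.02)):
          """Solve the two equations simultaneously for (x_In, y_N)."""
          eqs = lambda p: (lattice_constant(*p) - a_meas, bandgap(*p) - eg_meas)
          return fsolve(eqs, guess)

      x_in, y_n = solve_composition(a_meas=5.74, eg_meas=1.05)   # example inputs
      print(f"In = {x_in:.3f}, N = {y_n:.3f}")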

  15. O foro competente nas causas matrimoniais segundo a instrução Dignitas Connubii

    Directory of Open Access Journals (Sweden)

    Orsi, João Carlos

    2006-01-01

    The instruction Dignitas Connubii presents the norms to be observed in ecclesiastical tribunals, issued to serve as a vademecum for judges and ministers of the tribunals, with a view to giving faster and more reliable treatment to cases of matrimonial nullity. It should be noted that the Instruction gathers all the extra-codical material concerning the competent forum in such matrimonial cases. This information is of great importance for ecclesiastical justice.

  16. Proteínas inmunodominantes de Brucella Melitensis evaluadas por Western Blot

    Directory of Open Access Journals (Sweden)

    Elizabeth Anaya

    1997-01-01

    Total protein extracts of Brucella melitensis were separated on a 15% SDS-PAGE gel. Their seroreactivity was analysed by Western blot with satisfactory results. For this purpose, negative control sera (n=03), and sera from patients with brucellosis (n=34), cholera (n=12), typhoid fever (n=02) and tuberculosis (n=02) were used. This immunodiagnostic test detected highly specific (100%) seroreactive bands corresponding to 8, 14, 18, a 25-48 complex, and 58 kDa. The sensitivity of the test was 90% using the aforementioned sera.

  17. O controle urbano nas favelas urbanizadas : o caso da Região do ABC

    OpenAIRE

    Nakamura, Milton Susumu

    2014-01-01

    Supervisor: Profa. Dra. Rosana Denaldi. Dissertation (master's) - Universidade Federal do ABC, Graduate Programme in Planning and Management of the Territory, 2014. This work deals with urban control in upgraded favelas, based on a study of the experiences of municipalities in the ABC region. The research analyses the favela upgrading and regularization processes in the municipalities of Diadema, Santo André and São Bernardo do Campo and identifies the legal framework established...

  18. A cultura do estupro como método perverso de controle nas sociedades patriarcais

    OpenAIRE

    Andrea Almeida Campos

    2016-01-01

    This article, conceiving the crime of rape as the expression of a perversion of those who commit it, the crime being classified as heinous in Brazil, aims to answer why it is tolerated and naturalized, especially in societies of a patriarchal model. This tolerance concerns not only its impunity but also involves a set of practices that monitor, manipulate and censor the victim's behaviour and lacerate her body. The article argues that these practices would form part of...

  19. Materiales biodegradables en base a proteínas de soja y montmorillonitas

    OpenAIRE

    Echeverría, Ignacio

    2012-01-01

    Among biomaterials, soy proteins have the capacity to form edible and/or biodegradable films. Compared with synthetic polymers, these protein films show excellent barrier properties against gases, lipids and aromas, but they commonly do not show mechanical and water-vapour barrier properties that are satisfactory for practical applications. In order to improve the functionality of these films, this work studied the preparation of na...

  20. Em busca do tempo nas ruas e praças de São Paulo

    OpenAIRE

    Frehse, Fraya

    2016-01-01

    How does the daily life of us urban researchers, in the cities where we live, reverberate in our research trajectories on those cities? I argue in this study that there are intellectual concerns derived from our urban condition, so to speak. It suffices to take the urban in Lefebvrian terms, that is, as a methodological reference for reflecting on the historical contradictions that permeate everyday life in the city as a space that simultaneously favours and hinders, in a privi...

  1. High-Power 1180-nm GaInNAs DBR Laser Diodes

    DEFF Research Database (Denmark)

    Aho, Antti T.; Viheriala, Jukka; Korpijarvi, Ville-Markus

    2017-01-01

    We report high-power 1180-nm GaInNAs distributed Bragg reflector laser diodes with and without a tapered amplifying section. The untapered and tapered components reached room temperature output powers of 655 mW and 4.04 W, respectively. The diodes exhibited narrow linewidth emission with side...... and better carrier confinement compared with traditional GaInAs quantum wells. The development opens new opportunities for the power scaling of frequency-doubled lasers with emission at yellow-orange wavelengths....

  2. LAS PROTEÍNAS DESORDENADAS Y SU FUNCIÓN: UNA NUEVA FORMA DE VER LA ESTRUCTURA DE LAS PROTEÍNAS Y LA RESPUESTA DE LAS PLANTAS AL ESTRÉS

    Directory of Open Access Journals (Sweden)

    César Luis Cuevas-Velázquez

    2011-01-01

    The dogma relating the function of a protein to a defined three-dimensional structure has been challenged in recent years by the discovery and characterization of the proteins known as unstructured or disordered proteins. These proteins possess a high structural flexibility that allows them to adopt different structures and, therefore, to recognize diverse ligands while preserving specificity in recognizing them. Proteins of this type, which are highly hydrophilic and accumulate under water-deficit conditions (drought, salinity, freezing), have been called hydrophilins. In plants, the best-characterized hydrophilins are the LEA (Late Embryogenesis Abundant) proteins, which accumulate abundantly in the dry seed and in vegetative tissues when plants are exposed to water-limiting conditions. Recent evidence has shown that LEA proteins are required for plants to tolerate and adapt to conditions of low water availability. This review describes the most relevant data linking the physicochemical characteristics of these proteins to their structural flexibility and to how it is affected by environmental conditions, as well as data on their possible functions in the plant cell under water limitation.

  3. Development of an advanced fluid-dynamic analysis code: α-flow

    International Nuclear Information System (INIS)

    Akiyama, Mamoru

    1990-01-01

    A project for the development of a large-scale three-dimensional fluid-dynamic analysis code, α-FLOW, keeping pace with the recent advancement of supercomputers and workstations, has been in progress. This project, called the α-Project, has been promoted by the Association for Large Scale Fluid Dynamics Analysis Code, which comprises private companies and research institutions such as universities. The development period for α-FLOW is four years, March 1989 to March 1992. To date, the major portions of the basic design and program preparation have been completed, and the project is in the stage of testing each module. In this paper, the present status of the α-Project, its design policy and an outline of α-FLOW are described. (author)

  4. UAS Integration in the NAS Project: Integrated Test and Evaluation (IT&E) Flight Test 3. Revision E

    Science.gov (United States)

    Marston, Michael

    2015-01-01

    The desire and ability to fly Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) is of increasing urgency. The application of unmanned aircraft to perform national security, defense, scientific, and emergency management are driving the critical need for less restrictive access by UAS to the NAS. UAS represent a new capability that will provide a variety of services in the government (public) and commercial (civil) aviation sectors. The growth of this potential industry has not yet been realized due to the lack of a common understanding of what is required to safely operate UAS in the NAS. NASA's UAS Integration into the NAS Project is conducting research in the areas of Separation Assurance/Sense and Avoid Interoperability, Human Systems Integration (HSI), and Communication to support reducing the barriers of UAS access to the NAS. This research is broken into two research themes namely, UAS Integration and Test Infrastructure. UAS Integration focuses on airspace integration procedures and performance standards to enable UAS integration in the air transportation system, covering Sense and Avoid (SAA) performance standards, command and control performance standards, and human systems integration. The focus of Test Infrastructure is to enable development and validation of airspace integration procedures and performance standards, including the integrated test and evaluation. In support of the integrated test and evaluation efforts, the Project will develop an adaptable, scalable, and schedulable relevant test environment capable of evaluating concepts and technologies for unmanned aircraft systems to safely operate in the NAS. To accomplish this task, the Project will conduct a series of Human-in-the-Loop and Flight Test activities that integrate key concepts, technologies and/or procedures in a relevant air traffic environment. Each of the integrated events will build on the technical achievements, fidelity and complexity of the previous tests and

  5. Comportamento proativo nas organizações: o efeito dos valores pessoais

    Directory of Open Access Journals (Sweden)

    Meiry Kamia

    Proactive behaviour is defined as a set of extra-role behaviours in which the worker spontaneously seeks changes in the work environment, solving and anticipating problems with a view to long-term goals that benefit the organization. This study aimed to investigate the relationship between personal values and proactive behaviour in organizations. The Personal Values Questionnaire and the Escala de Comportamento Proativo nas Organizações (Proactive Behaviour in Organizations Scale), both already validated for Brazil, were used as measuring instruments. After eliminating outliers, the sample consisted of 325 employees of different organizations. Linear regression analysis revealed that values significantly predict proactive behaviours, indicating a positive relationship of the motivational types stimulation (B=0.205, p<0.01) and universalism/benevolence (B=0.302, p<0.01) with proactivity, and a negative relationship with the motivational type tradition (B=-0.189, p<0.01), as predicted by the theoretical framework. Implications for studies in the area are discussed.

  6. Uma outra ideia da Índia. As literaturas nas línguas Bhashas

    Directory of Open Access Journals (Sweden)

    Cielo Griselda Festino

    2013-04-01

    http://dx.doi.org/10.5007/2175-7968.2013v1n31p103 The aim of this article is to discuss Indian narratives in the bhasha languages, the vernacular languages of the Indian subcontinent, through a politics and poetics of translation that gives voice and visibility to cultures that would otherwise remain restricted to the diverse cultures in which they are produced. In this way, not only the literatures of the "front yard", that is, Indian narratives written in the English of the diaspora, gain visibility, but also the narratives of the "backyard", written in the vernacular languages of India. In this process the term "vernacular" acquires a new meaning, in the sense that what is really "vernacularized" is the English language, because it becomes a vehicle through which the bhasha literatures become known. To illustrate this process the article presents an analysis of the short story "Thayyaal", written in the Tamil language of southern India.

  7. EFICIÊNCIA DO USO DE SISTEMAS ESPECIALISTAS NAS ÁREAS DA SAÚDE

    Directory of Open Access Journals (Sweden)

    Gabriel Oliveira Tomedi

    2017-02-01

    The expert system is one of the techniques of artificial intelligence aimed at assisting professionals in a given domain. In other words, expert systems are defined as computer programs that seek to solve problems in a given field of knowledge in the same way as a specialist. Accordingly, the objective of this study is to carry out a bibliographic survey on the effectiveness of the use of these systems in the health area. Studies from the last twelve years were gathered from the SciELO and Google Scholar databases. This review found reports of greater diagnostic accuracy, shorter service times, improved professional performance and easy access to information. Based on these reports, it can be concluded that the use of expert systems is effective, since it has improved several aspects in these areas.

  8. JK E A REINVENÇÃO DO COTIDIANO NAS NARRATIVAS JORNALÍSTICAS BRASILEIRAS

    Directory of Open Access Journals (Sweden)

    Renato de Almeida Vieira e Silva

    2014-03-01

    What is the importance of speeches for the construction of the presidential image in journalistic narratives, in a given historical context, this construction of meaning being capable even of resignifying the daily life of a country, activating the imaginary and transcending that period of government, becoming mythological even for the presidents who came in succession? This paper sets out to analyse these hypotheses of symbolic production and meaning found in the speeches of President JK published in some of the main Brazilian magazines between 1956 and 1960, represented by O Cruzeiro and Manchete, together with some quotations published in the magazines Época, Veja and Isto É in more recent periods. To this end, concepts from authors such as Bourdieu, Barthes, Orlandi, Heller, Motta, Eliade and Girardet are used.

  9. Péptidos bioactivos en proteínas de reserva

    Directory of Open Access Journals (Sweden)

    Millán, F.

    2000-10-01

    A review of the bioactive peptides described so far in storage proteins, mainly milk proteins, has been carried out. Bioactive peptides are small amino acid sequences that are inactive in the native protein but can be released after hydrolysis of these proteins and then exert different functions. Among the main ones are bioactive peptides with opioid, opioid-antagonist, immunomodulatory, antithrombotic, ion-transporting or antihypertensive activity. The possible presence of these peptides in other protein sources, mainly oilseed plants, and their possible use are discussed.

  10. O ensino de literatura brasileira nas escolas: uma ferramenta para a mudança social

    Directory of Open Access Journals (Sweden)

    Gustavo Zambrano

    2015-08-01

    This paper aims to provide a detailed account of the external tensions that compromise education in Brazil. These tensions concern the military coup that inhibited improvements in the educational sector, the exclusive study in schools of works considered canonical, and the content-driven study of literature required by university entrance examinations. This is an important debate, because the current teaching of literature in schools prevents the formation of students capable of analysing and interpreting a literary text and of understanding the historical and sociological context of a country. We therefore detail these problems and show how the teaching of Brazilian literature can be important for students' perception of social problems and, consequently, for forming critical students. Keywords: social change, canon, university entrance examination, teaching of Brazilian literature

  11. O ensino e a experiência nas narrativas de professores de Inglês

    Directory of Open Access Journals (Sweden)

    Annallena de Souza Guedes

    2016-09-01

    Abstract: This paper analyses three narratives of practising English teachers, through which experiences concerning the process of teaching English in public-institution contexts in Brazil are revealed. Based on the concept of experience (MICCOLI, 2010), we seek to understand which experiences emerge from these narratives and how they influence the teachers' teaching practice. We also analyse how English teachers see themselves and which challenges they face in their work contexts. The results of this study showed that, despite all the experiences of difficulty and indiscipline revealed in the narratives, two of the teachers still seem to find motivation and hope in their profession. Moreover, we find that the way teachers see their reality, their students and their work is important for characterizing their professional practice. We therefore believe that the context and experiences portrayed in the narratives may be paths that direct these teachers' actions in the classroom and, consequently, enable reflection on and changes in their role as educators.

  12. Os sentidos de compreensão nas teorias de Weber e Habermas

    Directory of Open Access Journals (Sweden)

    José Geraldo A. B. Poker

    2013-01-01

    Starting from the assumption that the social theory developed by Habermas closely resembles that constructed by M. Weber, a comparative study was carried out to identify the ways in which Weber and Habermas elaborated the concept of understanding, while each, in his own way, elected it as the methodological instrument suited to the difficulties of producing scientific knowledge in the social sciences. For both Weber and Habermas, knowledge in the social sciences cannot escape the direct influence of the scientist's subjectivity, nor can it protect itself from the historical-cultural contingencies to which all human action is inevitably bound. Therefore, grounded in their own reasons, both Weber and Habermas point to understanding as the possible form of knowledge, which implies renouncing explanatory pretensions and general theories of ultimate foundation, which are typical of the conventional sciences.

  13. POLICIES AND PRACTICES FOR IMPLEMENTATION OF IFRS AND NAS IN THE REPUBLIC OF MOLDOVA

    Directory of Open Access Journals (Sweden)

    Lica ERHAN

    2015-06-01

    This study aims to analyse the process of harmonization of the national accounting standards of the Republic of Moldova with the international standards. It highlights the main advantages, disadvantages, risks and opportunities of implementing the new standards. A major step for the Republic of Moldova was the implementation of IFRS, which became mandatory for all public-interest entities from 1 January 2012, and the adoption of new NAS in accordance with the EU Directives and IFRS for small and medium-sized entities, for which the transition to IFRS was difficult due to the high costs involved. The new NAS came into force on 1 January 2014 as a recommendation, and starting from 1 January 2015 they will be mandatory for all entities. The paper includes a practical analysis of the impact of the transition to IFRS on the financial results of a public-interest entity, Moldova Agroindbank, the largest commercial bank with the highest market share in the banking sector of the Republic of Moldova. As a result of the analysis of primary and secondary indicators calculated on the basis of the financial statements prepared by the commercial bank at 31.12.11, we found that the transition to IFRS resulted in growth in all financial indicators.

  14. Tetovēšanas tradīcija Ķīnas vēsturē

    OpenAIRE

    Zolotarjova, Anastasija

    2012-01-01

    The title of this bachelor's thesis is "The Tattooing Tradition in Chinese History" ("Tetovēšanas tradīcija Ķīnas vēsturē"). The tattooing tradition has a long history. The peoples inhabiting the territory of China have used tattooing since antiquity for various purposes: as protection against evil spirits, as decoration, as a form of punishment, as a mark of recognition, and for the taking of oaths. The tattooing traditions of present-day Chinese ethnic minorities are also examined. The aim of the thesis is to study the traditions and reasons for tattooing among the ancient peoples of China and among modern Chinese ethnic minorities, as well as...

  15. First-principles study on structure stabilities of α-S and Na-S battery systems

    Science.gov (United States)

    Momida, Hiroyoshi; Oguchi, Tamio

    2014-03-01

    To understand the microscopic mechanisms of the charge and discharge reactions in Na-S batteries, there is an increasing need to study the fundamental atomic and electronic structures of elemental S as well as of the Na-S phases. The most stable form of S at ambient temperature and pressure is the orthorhombic α-S crystal, which consists of puckered S8 rings crystallizing in space group Fddd. In this study, the crystal structure of α-S is examined using first-principles calculations with and without the van der Waals interaction corrections of Grimme's method, and the results clearly show that the van der Waals interactions between the S8 rings play a crucial role in the cohesion of α-S. We also study the structural stabilities of the Na2S, NaS, NaS2, and Na2S5 phases with reported crystal structures. Using the calculated total energies of the crystal structure models, we estimate discharge voltages assuming discharge reactions 2Na + xS --> Na2Sx, and discharge reactions in Na/S battery systems are discussed by comparison with experimental results. This work was partially supported by the Elements Strategy Initiative for Catalysts and Batteries (ESICB) of the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
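
    As a hedged illustration of the voltage estimate described above (not the authors' code or energies), the average discharge voltage for 2Na + xS --> Na2Sx follows from V = -[E(Na2Sx) - 2E(Na) - xE(S)] / (2e); with total energies per formula unit expressed in eV, the quotient is directly a voltage. The energy values below are placeholders.

      # Average discharge voltage for 2 Na + x S -> Na2Sx from total energies.
      # Energies are per formula unit in eV; the values below are placeholders,
      # not the first-principles results of the study.
      E_NA_BCC = -1.31     # E(Na), eV/atom            (placeholder)
      E_S_ALPHA = -4.25    # E(S) in alpha-S, eV/atom  (placeholder)

      def discharge_voltage(e_na2sx, x):
          """V = -[E(Na2Sx) - 2 E(Na) - x E(S)] / (2 e), energies in eV."""
          delta_e = e_na2sx - 2.0 * E_NA_BCC - x * E_S_ALPHA
          return -delta_e / 2.0   # two electrons transferred per formula unit

      # Example: hypothetical total energy of Na2S5 (placeholder number)
      print(f"Na2S5: {discharge_voltage(-25.9, 5):.2f} V")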

  16. Alteração nas frações das proteínas miofibrilares e maciez do músculo Longissimus de bovinos no período post mortem

    OpenAIRE

    Santos,Gilmara Bruschi; Ramos,Paulo Roberto Rodrigues; Spim,Jeison Solano

    2014-01-01

    The aim of this study was to identify, by electrophoresis, the changes in the myofibrillar protein fractions during the post-mortem period in cattle of different genetic groups and to analyse meat tenderness in samples chilled for 24 hours (non-aged) and aged for 7 days. Samples of the Longissimus muscle of forty-eight cattle belonging to 4 genetic groups were used: 12 Nellore; 12 crossbred ½ Nellore ½ Aberdeen-Angus x Brahman; 12 Brangus; 12 crossbred ½ Nellore ½ Aberdeen-Angu...

  17. A PERCEPÇÃO DA GESTÃO DO CONHECIMENTO NAS EMPRESAS EXPORTADORAS DA AMREC

    Directory of Open Access Journals (Sweden)

    Julio Cesar Zilli

    2014-06-01

    With globalization and the technology era, companies increasingly rely on intellectual capital, which concerns the knowledge and skills exercised by their employees, to carry out activities related to the domestic or international market. In view of this, the present study aims to identify the perception of foreign-trade managers regarding Knowledge Management (KM) in the exporting companies of the Association of Municipalities of the Carboniferous Region (AMREC). As to its ends, the research is descriptive; as to the means of investigation, it is classified as bibliographic and field research. The sample consisted of 10 exporting companies that maintained commercial relations with the foreign market from January to December 2012. For data collection, a questionnaire with a quantitative approach was used to learn the perception of foreign-trade managers regarding the identification, creation, storage, sharing and use of knowledge. An unfavourable synergy on the part of managers and the organization towards monitoring and implementing KM practice is perceived. Some barriers, such as motivation and sharing, interpersonal relations, and support from the organizational structure and culture, are present in the companies. For the development of activities aimed at the international market, these barriers must be addressed together, resulting in the beneficial use of the five dimensions of KM: identification, creation, storage, sharing and use.

  18. Serious Gaming for Test & Evaluation of Clean-Slate (Ab Initio) National Airspace System (NAS) Designs

    Science.gov (United States)

    Allen, B. Danette; Alexandrov, Natalia

    2016-01-01

    Incremental approaches to air transportation system development inherit current architectural constraints, which, in turn, place hard bounds on system capacity, efficiency of performance, and complexity. To enable airspace operations of the future, a clean-slate (ab initio) airspace design(s) must be considered. This ab initio National Airspace System (NAS) must be capable of accommodating increased traffic density, a broader diversity of aircraft, and on-demand mobility. System and subsystem designs should scale to accommodate the inevitable demand for airspace services that include large numbers of autonomous Unmanned Aerial Vehicles and a paradigm shift in general aviation (e.g., personal air vehicles) in addition to more traditional aerial vehicles such as commercial jetliners and weather balloons. The complex and adaptive nature of ab initio designs for the future NAS requires new approaches to validation, adding a significant physical experimentation component to analytical and simulation tools. In addition to software modeling and simulation, the ability to exercise system solutions in a flight environment will be an essential aspect of validation. The NASA Langley Research Center (LaRC) Autonomy Incubator seeks to develop a flight simulation infrastructure for ab initio modeling and simulation that assumes no specific NAS architecture and models vehicle-to-vehicle behavior to examine interactions and emergent behaviors among hundreds of intelligent aerial agents exhibiting collaborative, cooperative, coordinative, selfish, and malicious behaviors. The air transportation system of the future will be a complex adaptive system (CAS) characterized by complex and sometimes unpredictable (or unpredicted) behaviors that result from temporal and spatial interactions among large numbers of participants. A CAS not only evolves with a changing environment and adapts to it, it is closely coupled to all systems that constitute the environment. Thus, the ecosystem that

  19. Emulsiones alimentarias aceite-en-agua estabilizadas con proteínas de atún

    Directory of Open Access Journals (Sweden)

    Ruiz-Márquez, D.

    2010-12-01

    Full Text Available This work is focused on the development of o/w salad dressing-type emulsions stabilized by tuna proteins. The influence of protein conservation methods after the extraction process (freezing or liofilization on the rheological properties and microstructure of these emulsions was analyzed. Processing variables during emulsification were also evaluated. Stable emulsions with adequate rheological and microstructural characteristics were prepared using 70% oil and 0.50% tuna proteins. From the experimental results obtained, we may conclude that emulsion rheological properties are not significantly affected by the protein conservation method selected. On the contrary, an increase in homogenization speed favours an increase in the values of the linear viscoelastic functions. Less significant is the fact that as agitation speed increases further, mean droplet size steadily decreases.

    El presente trabajo se ha centrado en el desarrollo de emulsiones alimentarias aceite-en-agua estabilizadas con proteínas de atún. Específicamente, se ha analizado la influencia del método de conservación de las proteínas aisladas (liofilización, congelación y de las condiciones de procesado seleccionadas sobre el comportamiento reológico y la microestructura de dichas emulsiones. Se han preparado emulsiones aceite en agua (con un contenido del 70% en peso de aceite estabilizadas con proteínas de atún. La concentración de emulsionante usada ha sido 0,50% en peso. El comportamiento reológico de estas emulsiones no depende significativamente del método de conservación de la proteína empleado. Por otra parte, un aumento de la velocidad de agitación durante el proceso de manufactura de la emulsión da lugar a una disminución continua del tamaño medio de gota y a un aumento de las funciones viscoelásticas dinámicas, menos significativo a medida que aumenta dicha velocidad de agitación.

  20. Toracotomia minimamente invasiva nas intervenções cirúrgicas valvares

    Directory of Open Access Journals (Sweden)

    PEREIRA Marcelo Balestro

    1998-01-01

    Introduction: the performance of surgical procedures through mini-thoracotomies is a current topic; initially used for myocardial revascularization operations, they have also been proposed as an approach for valve operations. The objective of this work is to analyse, in a prospective study, the results of mini-thoracotomy in relation to the traditional technique in valve procedures. Patients and methods: between November 1996 and February 1998, two groups, 8 patients operated on through a mini-thoracotomy (Group 1) and 8 controls (Group 2), comparable in sex, age, weight/height, preoperative functional class, underlying disease and proposed operation, underwent aortic or mitral valve repair or replacement. The patients in Group 1 were operated on through a right parasternal incision of up to 8 cm, with cardiopulmonary bypass (CPB) established through femoral arterial and venous cannulation, and those in Group 2 (controls) through a median sternotomy. Both groups were followed until hospital discharge. Results: the parameters evaluated intraoperatively and postoperatively, as well as the statistical analysis, are shown in Tables 1 and 2. There were no immediate deaths. Two complications were recorded: one perioperative infarction and one stroke, both in Group 2. Conclusion: the partial results allow us to infer that the approach through small thoracotomies is feasible without an increase in morbidity and mortality, surgical time or hospital stay. Possible objective advantages of one method over the other, apart from the aesthetic aspect, are not evident up to this stage of the study.

  1. Studies of quantum levels in GaInNAs single quantum wells

    International Nuclear Information System (INIS)

    Shirakata, Sho; Kondow, Masahiko; Kitatani, Takeshi

    2006-01-01

    Spectroscopic studies have been carried out on the quantum levels in GaInNAs/GaAs single quantum wells (SQWs). Photoluminescence (PL), PL excitation (PLE), photoreflectance (PR), and high-density-excited PL (HDE-PL) were measured on high-quality GaInNAs SQWs, Ga0.65In0.35N0.01As0.99/GaAs (well thickness lz = 10 nm) and Ga0.65In0.35N0.005As0.995/GaAs (lz = 3-10 nm), grown by molecular-beam epitaxy. For Ga0.65In0.35N0.01As0.99/GaAs (lz = 10 nm), PL at 8 K exhibited a peak at 1.07 eV due to the exciton-related transition between the quantum levels of the ground states (e1-hh1). Both PR and PLE exhibited three transitions (1.17, 1.20 and 1.32 eV); the former two were assigned to either the e1-lh1 or the e2-hh2 transition, while the transition at 1.32 eV was assigned to the e2-lh2 transition. In HDE-PL, a new PL peak was observed at about 1.2 eV and was assigned to the unresolved e1-lh1 and e2-hh2 transitions. Similar optical measurements have been made on Ga0.65In0.35N0.005As0.995/GaAs with various lz (3-10 nm), and the dependence of the optical spectra and of the quantum-level energies on lz has been studied. It has been found that HDE-PL in combination with PLE is a good tool for studying the quantum levels of GaInNAs SQWs. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (Abstract Copyright [2006], Wiley Periodicals, Inc.)
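
    The record reports measured transition energies only. As a rough, hedged illustration of how the well width lz sets the spacing of confined levels such as e1 and e2 (an infinite-barrier particle-in-a-box estimate with an assumed electron effective mass, not the model used in the study), one can compute:

      import numpy as np

      HBAR = 1.054571817e-34    # J*s
      M0 = 9.1093837015e-31     # electron rest mass, kg
      EV = 1.602176634e-19      # J per eV

      def infinite_well_levels(width_nm, m_eff, n_levels=2):
          """E_n = n^2 pi^2 hbar^2 / (2 m L^2), infinite-barrier approximation."""
          L = width_nm * 1e-9
          m = m_eff * M0
          return [n**2 * np.pi**2 * HBAR**2 / (2.0 * m * L**2) / EV
                  for n in range(1, n_levels + 1)]

      # Illustrative effective mass for the GaInNAs conduction band (placeholder)
      e1, e2 = infinite_well_levels(width_nm=10.0, m_eff=0.07)
      print(f"e1 = {e1*1e3:.0f} meV, e2 = {e2*1e3:.0f} meV above the well bottom")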

  2. Benefits of a Unified LaSRS++ Simulation for NAS-Wide and High-Fidelity Modeling

    Science.gov (United States)

    Glaab, Patricia; Madden, Michael

    2014-01-01

    The LaSRS++ high-fidelity vehicle simulation was extended in 2012 to support a NAS-wide simulation mode. Since the initial proof-of-concept, the LaSRS++ NAS-wide simulation is maturing into a research-ready tool. A primary benefit of this new capability is the consolidation of the two modeling paradigms under a single framework to save cost, facilitate iterative concept testing between the two tools, and to promote communication and model sharing between user communities at Langley. Specific benefits of each type of modeling are discussed along with the expected benefits of the unified framework. Current capability details of the LaSRS++ NAS-wide simulations are provided, including the visualization tool, live data interface, trajectory generators, terminal routing for arrivals and departures, maneuvering, re-routing, navigation, winds, and turbulence. The plan for future development is also described.

  3. Influencia del pH en la estabilidad de emulsiones elaboradas con proteínas de salvado de arroz

    Directory of Open Access Journals (Sweden)

    Laura Maldonado

    2011-12-01

    Although proteins of animal origin may in many instances have better functional characteristics than proteins of vegetable origin, their increasing cost may favour the expanding use of plant proteins as a replacement. One source of vegetable protein is rice bran, obtained as a by-product of the polishing of brown rice (Oryza sativa L.) to produce white rice. The creaming, flocculation and coalescence processes of emulsions prepared with rice bran proteins at pH 6.0 and 8.0 were studied. The rice bran proteins were obtained in an alkaline medium, starting from defatted rice bran. The destabilization process of the emulsions was analysed from data obtained by the light backscattering method using a Turbiscan 2000 instrument; for creaming, the data were fitted to biphasic kinetics with a second-order (hyperbolic) component and a sigmoidal component. The emulsions prepared at pH 8 showed greater stability against creaming, while the flocculation and coalescence processes were not influenced by the different pH values.
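
    The functional forms of the fit are not spelled out in the record. The sketch below is a minimal curve-fitting example under the assumption of one hyperbolic (second-order-like) term plus one sigmoidal term for the backscattering change during creaming; the model form, parameter names and synthetic data are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import curve_fit

      def creaming_kinetics(t, a, t_half, b, t_lag, tau):
          """Biphasic model: hyperbolic (second-order-like) + sigmoidal term."""
          hyperbolic = a * t / (t_half + t)
          sigmoidal = b / (1.0 + np.exp(-(t - t_lag) / tau))
          return hyperbolic + sigmoidal

      # Synthetic backscattering-change data standing in for Turbiscan output
      t = np.linspace(0, 120, 60)                        # minutes (placeholder)
      bs = creaming_kinetics(t, 8, 15, 12, 60, 10)
      bs += np.random.default_rng(0).normal(0, 0.3, t.size)

      popt, _ = curve_fit(creaming_kinetics, t, bs,
                          p0=[5, 10, 10, 50, 5], maxfev=5000)
      print("fitted (a, t_half, b, t_lag, tau):", np.round(popt, 2))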

  4. Las proteínas alergénicas: un novedoso blanco para el desarrollo de estudios en proteomica funcional

    Directory of Open Access Journals (Sweden)

    Elkin Navarro

    2008-01-01

    The environment, genetic background and immunocompetence of the individual are involved in the pathogenesis of allergic diseases. Our immune system is continuously exposed to numerous proteins, yet only a few induce an allergic immune response. The intrinsic potential of an allergenic protein to induce sensitization manifests itself only in susceptible individuals who are genetically predisposed to atopic responses. Many of these allergenic proteins share some homology in their amino acid sequences. These allergens possess a wide range of molecular characteristics, none of which is unique to allergenic proteins; even so, some of these characteristics are more common among certain allergens than in other proteins. It has been shown that some proteins with enzymatic activity induce allergic reactions and that this biological property is associated with their catalytic activity. This review describes the main molecular characteristics of allergenic proteins, with emphasis on the cysteine proteases of house dust mites, since they are a risk factor for the development of an allergic immune response in susceptible individuals and act as triggering factors of inflammatory responses in the pathophysiology of allergic respiratory diseases.

  5. Consumidoras e heroínas: gênero na telenovela

    Directory of Open Access Journals (Sweden)

    Heloisa Buarque de Almeida

    2007-01-01

    http://dx.doi.org/10.1590/S0104-026X2007000100011 This paper explores the correlations between telenovelas, consumption and gender, seeking to understand how the media is linked to the promotion of goods and of consumer culture, and how gender is an important axis in this articulation. The research was based on an ethnographic study of telenovela reception and unfolds into an analysis of the relation between television and advertising, discussing the feminization of consumption and the construction of a certain hegemonic feminine image in telenovelas and commercials.

  6. Nuevo método para medir un inductor de proteínas recombinantes

    OpenAIRE

    Fernández Castañé, Alfred

    2012-01-01

    IPTG is a synthetic sugar, an analogue of lactose, used to induce protein production in the bacterium Escherichia coli. In this doctoral thesis, an analytical method was developed for the first time to measure IPTG in the culture medium and inside the cells, which made it possible to study induction behaviour in high-cell-density cultures and to optimize the IPTG dose to obtain the maximum amount of product.

  7. Adherencia de levaduras de Paracoccidioides brasiliensis a proteínas de matriz extracelular: resultados preliminares

    Directory of Open Access Journals (Sweden)

    Luz Elena Cano

    2003-01-01

    The adhesion of microorganisms to host cells or to extracellular matrix proteins (ECMP) represents the first step in establishing an infectious process (1). It has been determined that propagules of clinically important fungi bind to different ECMP. It has recently been shown that conidia and mycelia of Paracoccidioides brasiliensis bind to ECMP such as laminin, fibrinogen and fibronectin (2). To date, the interaction between P. brasiliensis yeast cells and ECMP is not known.

  8. Security Risk Assessment Process for UAS in the NAS CNPC Architecture

    Science.gov (United States)

    Iannicca, Dennis Christopher; Young, Daniel Paul; Suresh, Thadhani; Winter, Gilbert A.

    2013-01-01

    This informational paper discusses the risk assessment process conducted to analyze Control and Non-Payload Communications (CNPC) architectures for integrating civil Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS). The assessment employs the National Institute of Standards and Technology (NIST) Risk Management Framework to identify threats, vulnerabilities, and risks to these architectures and recommends corresponding mitigating security controls. This process builds upon earlier work performed by RTCA Special Committee (SC) 203 and the Federal Aviation Administration (FAA) to roadmap the risk assessment methodology and to identify categories of information security risks that pose a significant impact to aeronautical communications systems. Deviations from the typical process are described with regard to this aeronautical communications system. Due to the sensitive nature of the information, data resulting from the risk assessment pertaining to threats, vulnerabilities, and risks is beyond the scope of this paper.

  9. Conflitos e contradições nas raízes dos movimentos sociais rurais brasileiros

    Directory of Open Access Journals (Sweden)

    Emerson Dias

    2003-12-01

    The history of rural social movements in Brazil is full of actions and reactions, of organized and instinctive initiatives, of ebb and flow. This article aims to map, up to the present moment, some of the "historical paths" that led the first men, whether Indians, blacks, immigrants or white proletarians, to mobilize against the oppression of capital, of the latifundium and of a State that has always acted as the guardian of a rigid agrarian reality. The same reality whose rusty gears have been forcibly set in motion in recent decades, driven by community mobilizations that represent an alternative path towards the democratization of land.

  10. O impacto da logística inversa e verde nas organizações

    OpenAIRE

    Arnaldo, Lídia Tanislá

    2018-01-01

    Dissertation/Project Work/Internship Report submitted in partial fulfillment of the requirements for the degree of Master in Business Sciences - Logistics Management branch. When it comes to the environment, it is important to note that companies have a responsibility to manage waste so that it is properly treated or destroyed. This work focuses on reverse logistics, in particular reverse logistics in healthcare, and its objective is to study the impact of reverse and green logistics on comp...

  11. Temperature coefficients for GaInP/GaAs/GaInNAsSb solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Aho, Arto; Isoaho, Riku; Tukiainen, Antti; Polojärvi, Ville; Aho, Timo; Raappana, Marianna; Guina, Mircea [Optoelectronics Research Centre, Tampere University of Technology, P.O. Box 692, FIN-33101 Tampere (Finland)

    2015-09-28

    We report the temperature coefficients for MBE-grown GaInP/GaAs/GaInNAsSb multijunction solar cells and the corresponding single junction sub-cells. Temperature-dependent current-voltage measurements were carried out using a solar simulator equipped with a 1000 W Xenon lamp and a three-band AM1.5D simulator. The triple-junction cell exhibited an efficiency of 31% at AM1.5G illumination and an efficiency of 37–39% at 70x real sun concentration. The external quantum efficiency was also measured at different temperatures. The temperature coefficients up to 80°C, for the open circuit voltage, the short circuit current density, and the conversion efficiency were determined to be −7.5 mV/°C, 0.040 mA/cm²/°C, and −0.09%/°C, respectively.
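    As a quick illustration of how such coefficients are applied, the sketch below linearly extrapolates cell parameters from a 25 °C reference point. Only the coefficients (−7.5 mV/°C, −0.09 %/°C absolute) and the 31% AM1.5G efficiency come from the abstract; the reference open-circuit voltage is an assumed, illustrative value.

      # Hedged example: linear extrapolation of cell parameters with temperature.
      # Reference Voc at 25 C is assumed for illustration only.

      def value_at_temperature(value_25c, coeff_per_c, temp_c, t_ref=25.0):
          """Linearly extrapolate a cell parameter from its 25 C reference value."""
          return value_25c + coeff_per_c * (temp_c - t_ref)

      voc_25c = 2.9    # V, assumed reference open-circuit voltage of the triple junction
      eff_25c = 31.0   # %, AM1.5G efficiency quoted in the abstract

      voc_80c = value_at_temperature(voc_25c, -7.5e-3, 80.0)   # -7.5 mV/C
      eff_80c = value_at_temperature(eff_25c, -0.09, 80.0)     # -0.09 %/C (absolute)

      print(f"Voc at 80 C: {voc_80c:.3f} V")          # ~2.49 V
      print(f"Efficiency at 80 C: {eff_80c:.1f} %")   # ~26.1 %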

  12. A Gestão Colaborativa da Marca nas Redes Sociais Virtuais

    Directory of Open Access Journals (Sweden)

    Clóvis Reis

    2010-03-01

    Full Text Available Starting from the concept of collaborative or cooperative learning (an educational process based on working together, sharing information, and the interdependence of group members), this article discusses the proposal of collaborative brand management in virtual social networks. In collaborative management, companies' communication strategy evolves from a one-to-many model to a many-to-many model, along a horizontal, two-way line of action. DOI: 10.5585/remark.v8i2.2133

  13. A Liderança Emocional nas Organizações

    OpenAIRE

    Antonholi, Aparecida Iembo

    2013-01-01

    ABSTRACT For many years, leadership was studied as personality traits and as the ability to influence people. This article addresses traditional concepts and emphasizes new strategies for the practice of leadership in organizations, using emotional intelligence as a tool to increase the capacity to manage emotions and feelings. Social competence, self-awareness, self-management and relationship management are competencies of emotional intellig...

  14. Capacitance spectroscopy on n-type GaNAs/GaAs embedded quantum structure solar cells

    Science.gov (United States)

    Venter, Danielle; Bollmann, Joachim; Elborg, Martin; Botha, J. R.; Venter, André

    2018-04-01

    In this study, both deep level transient spectroscopy (DLTS) and admittance spectroscopy (AS) have been used to study the properties of electrically active deep level centers present in GaNAs/GaAs quantum wells (QWs) embedded in p-i-n solar cells. The structures were grown by molecular beam epitaxy (MBE). In particular, the electrical properties of samples with Si (n-type) doping of the QWs were investigated. DLTS revealed four deep level centers in the material, whereas only three were detected by AS. NextNano++ simulation software was used to model the sample band-diagrams to provide reasoning for the origin of the signals produced by both techniques.
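    For context, deep level parameters are commonly extracted from DLTS data with a standard Arrhenius analysis of the emission rate. The sketch below shows that textbook reduction only; the temperatures and rates are invented and are not data from this study.

      import numpy as np

      # Standard DLTS Arrhenius analysis: e_n / T^2 = K * exp(-Ea / (kB * T)),
      # so a straight-line fit of ln(e_n / T^2) versus 1/T yields the trap
      # activation energy Ea from the slope. Values below are illustrative only.

      kB = 8.617e-5  # Boltzmann constant, eV/K

      T = np.array([180.0, 200.0, 220.0, 240.0])   # peak temperatures (K), assumed
      e_n = np.array([2.0, 18.0, 110.0, 500.0])    # emission rates (1/s), assumed

      slope, intercept = np.polyfit(1.0 / T, np.log(e_n / T**2), 1)
      Ea = -slope * kB
      print(f"Apparent activation energy: {Ea:.2f} eV")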

  15. Programa Mais Educação: impactos e perspectivas nas escolas do campo

    Directory of Open Access Journals (Sweden)

    Cláudia da Mota Darós Parente

    2017-08-01

    Full Text Available This study aims to analyze the impacts of the “Mais Educação” Program in Brazilian countryside schools, with reflections on the limits and possibilities of the program and full-time education. Information was collected through electronic questionnaires sent to public schools participating in the “Mais Educação” Program. The research considered different aspects: expanding the school day; record of full-time enrollments in the school census; provision of human, educational and financial resources; changes in available spaces; provision of educational, cultural, artistic and sports activities; improvement in the communication process with the community; providing continuing education; changes in the political-pedagogical project and the school curriculum; changes in student behavior; improvement in school performance; improvement in the quality of school meals; development of partnerships; use of other available spaces. Through a quantitative and qualitative analysis, we identified significant impacts of the program in the countryside schools, especially with regard to the expansion of educational opportunities. However, the achieved benefits occur among the historical problems present in the countryside schools that were not overcome by virtue of the “Mais Educação” Program format and depend on the consideration of local governments (states, municipalities and Federal District). It presents reflections on the limits and possibilities of the “Mais Educação” Program and full-time education in the Brazilian countryside schools.

  16. A busca e o uso da informação nas organizações

    Directory of Open Access Journals (Sweden)

    Waleska Silveira Lira

    Full Text Available This study aims to show the importance of the efficient use of information and knowledge in organizations. It analyzes different approaches to information seeking and use, taking as its framework the principles associated with information and knowledge management. It concludes that the differentiating factor is to detect and manage information effectively, through a process of searching, selecting, analyzing, disseminating and transforming that information into knowledge, with the goal of achieving a better position in the competitive space in which the organization operates.

  17. Valores humanos nas organizações: relação com a síndrome de Burnout e o engajamento laboral

    OpenAIRE

    Coelho, Gabriel Lins de Holanda

    2014-01-01

    For many decades, research in organizations focused on the negative effects experienced by workers, with burnout syndrome as the main exponent. In recent years, with the expansion of Positive Psychology, interest in positive aspects has grown and resulted in the study of work engagement, considered the antithesis of burnout and essential for maximizing human capital in organizations. However, in order to implement an environment conducive to stimulating engage...

  18. Empolgação com Copa freia protestos nas redes sociais no Brasil e no mundo

    OpenAIRE

    Zarko, Raphael

    2014-01-01

    After the 2013 Confederations Cup was marked by demonstrations against holding the World Cup in Brazil and against its excessive spending - both in the streets and on the internet - the impression that protests dropped during the Brazil World Cup is confirmed by an extensive survey of social networks. In monitoring more than 11 million Twitter messages in Brazil and worldwide, the number of mentions of protests is only 17 thousand - in percentage terms, this means that only 0...

  19. Vector para la coexpresión de varias proteínas heterólogas en cantidades equimolares

    OpenAIRE

    Daròs Arnau, José Antonio; Bedoya, Leonor; Martínez, Fernando

    2010-01-01

    The invention relates to an expression vector based on the nucleotide sequence of the genome of a Potyvirus, preferably tobacco etch virus, harboring a nucleotide sequence that encodes at least one heterologous protein, preferably two and more preferably three heterologous proteins. The heterologous proteins are expressed, in the cell transfected with this vector, as part of the viral polyprotein and are flank...

  1. Entre gueixas e samurais: a imigração japonesa nas revistas ilustradas (1897-1945)

    OpenAIRE

    Marcia Yumi Takeuchi

    2009-01-01

    This research aims to analyze the debates surrounding Japanese immigration in Brazilian illustrated magazines published in the cities of São Paulo and Rio de Janeiro, and in diplomatic documentation, in view of the spread of anti-Japanese sentiment in Brazilian society between 1897 and 1945. Based on the analysis of the cartoons and caricatures published in these magazines, whether literary or irreverent (comic) in tone, I seek to show that this iconography played a fundamental role in the constructi...

  2. Evidencia de la eficacia de la suplementación con proteínas en el rendimiento deportivo

    OpenAIRE

    Carrascal Quemada, César

    2014-01-01

    Protein supplementation is a key element in sport. It has been shown that, in both strength and endurance sports, it is an effective ergogenic aid that helps improve strength and speed and shortens recovery times. It has also been shown that a series of timing windows must be respected when taking supplementation, which depend on the sport and its objectives. Proteins have synergistic effects with other products, most notably ...

  3. EXTRAÇÃO E QUANTIFICAÇÃO DAS CLOROFILAS A E B NAS FOLHAS DA XANTHOSOMA SAGITTIFOLIUM

    OpenAIRE

    Gabriela Coelho Couceiro; Yara Barbosa Bustamante; Janicy Arantes Carvalho; Diego Pachelli Teixeira; Patrícia Marcondes dos Santos; Milton Beltrame Junior; Andreza Ribeiro Simioni

    2017-01-01

    The plant Xanthosoma sagittifolium (taioba) is a leafy vegetable that can meet many nutritional needs, being a source of protein, calcium, iron, vitamin C and other nutrients. Chlorophylls are the most abundant pigments in plants and have several health benefits. Therefore, the presence of chlorophylls in Xanthosoma sagittifolium was analyzed because of its role in the diet and its health benefits. The concentrations of chlorophylls a and b were determined by spectrophotometry...
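    For readers unfamiliar with the method, spectrophotometric chlorophyll quantification typically converts absorbances at two red-band wavelengths into concentrations using a published coefficient set. A minimal sketch is given below, assuming Arnon-type coefficients for 80% acetone extracts and a 1 cm path; the absorbance readings are invented, and the study's own wavelengths and coefficients may differ.

      # Hedged sketch of two-wavelength chlorophyll quantification.
      # Coefficients follow Arnon (1949) for 80% acetone, 1 cm cuvette;
      # the absorbance readings below are made up for illustration.

      def chlorophylls_ug_per_ml(a663, a645):
          chl_a = 12.7 * a663 - 2.69 * a645   # chlorophyll a, ug/mL
          chl_b = 22.9 * a645 - 4.68 * a663   # chlorophyll b, ug/mL
          return chl_a, chl_b

      a663, a645 = 0.52, 0.21                 # assumed absorbance readings
      chl_a, chl_b = chlorophylls_ug_per_ml(a663, a645)
      print(f"Chl a: {chl_a:.2f} ug/mL, Chl b: {chl_b:.2f} ug/mL")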

  4. Padrões de refluxo nas veias safenas em homens com insuficiência venosa crônica

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Engelhorn

    Full Text Available Abstract: Background: Chronic venous insufficiency (CVI) is frequent and predominates in women, but there is still little information on reflux in the saphenous veins in the male population. Objectives: To identify the different patterns of reflux in the great saphenous veins (GSVs) and small saphenous veins (SSVs) in men, correlating these data with the clinical presentation according to the Clinical, Etiological, Anatomical and Pathophysiological (CEAP) classification. Methods: A total of 369 lower limbs of 207 men with a clinical diagnosis of primary CVI were evaluated by vascular ultrasound (VU). The variables analyzed were the CEAP classification, the pattern of reflux in the GSVs and SSVs, and the correlation between the two. Results: In the 369 limbs evaluated, 72.9% of the GSVs presented reflux, predominantly of the segmental pattern (33.8%). In the SSVs, 16% of the lower limbs analyzed presented reflux, the most frequent being the distal pattern (33.9%). Of the limbs classified as C4, C5 and C6, 100% presented reflux in the GSV, with predominance of proximal reflux (25.64%), and 38.46% presented reflux in the SSV, with equivalence between the distal and proximal patterns (33.3%). Reflux at the saphenofemoral junction (SFJ) was detected in 7.1% of limbs in classes C0 and C1, 35.6% in classes C2 and C3, and 64.1% in classes C4 to C6. Conclusions: The segmental reflux pattern is predominant in the GSV, and the distal reflux pattern is predominant in the SSV. The occurrence of reflux at the SFJ is greater in patients with more advanced CVI.

  5. Detecção de proteínas imunorreativas de Rickettsia sp. cepa Mata Atlântica

    Directory of Open Access Journals (Sweden)

    Caroline S. Oliveira

    Full Text Available ABSTRACT: Brazilian Spotted Fever (BSF) is an infectious disease transmitted to humans by ticks. A new human rickettsiosis, named Rickettsia sp. Mata Atlântica strain, has been described as a cause of spotted fever in the State of São Paulo. The present work aimed to detect and identify proteins of this newly described strain with the potential to stimulate the immune system of a mammalian host. To this end, total protein extraction of Rickettsia sp. Mata Atlântica strain was performed. The extracted proteins were fractionated by electrophoresis. The protein bands were transferred to nitrocellulose membranes by electrical migration and subjected to Western blotting for protein detection. In all, seven immunoreactive proteins were detected. Two proteins were more abundant, with molecular weights of 200 and 130 kDa, respectively. Based on comparison with existing proteomic maps and on their molecular weights, it is suggested that the two detected proteins represent rOmpA (200 kDa) and rOmpB (130 kDa). The other detected proteins were less abundant and had molecular weights below 78 kDa, and may represent members of the surface cell antigen (Sca) family. The detected proteins may serve as a basis for studies on the development of sensitive and specific diagnostic methods and vaccines, as well as enabling new studies toward more effective therapies.

  6. LEITURA: QUADRO CONCEITUAL DA PRÁXIS NAS ORGANIZAÇÕES QUE INOVAM

    Directory of Open Access Journals (Sweden)

    Valdecir Pereira Uved

    2014-12-01

    Full Text Available Writing divided the history of humanity and revolutionized the ways knowledge is produced and transmitted. In this context, this exploratory conceptual essay aims to analyze reading as an element and praxis already used in the daily life of innovative organizations. This activity, practically invisible in the organizational routine, is understood as a possibility of differentiation for companies to become innovative spaces. The study's premise considered the environment of innovative organizations. The choice to analyze reading and its process for human and organizational development rests on the scarcity of studies that approach reading as a practice for innovation in organizations. To this end, the concept of innovation was first reviewed, and in its light reading and its process for human and organizational development were analyzed from a philosophical perspective, with innovation in view. It was possible to affirm that reading is one of the practices that, when present in organizations, contributes to innovation. As an opportunity for future research, the authors highlight exploratory work empirically grounded in the analysis of the so-called individual, collective and contextual actors.

  7. A salinomicina para o controle da eimeriose de caprinos leiteiros nas fases de cria e recria

    Directory of Open Access Journals (Sweden)

    Vieira Luiz da Silva

    2004-01-01

    Full Text Available Salinomycin was evaluated for the control of caprine eimeriosis in 27 crossbred kids randomly assigned to three treatments in a completely randomized design: T0, untreated (control group); T1 and T2, treated with doses of 1 and 2 mg of salinomycin/kg of live weight/day, respectively. In the suckling phase, there was no statistical difference (P > 0.005) in mean weight gain (WG) among the three treatments. In the rearing phase, group T0 showed significantly lower WG (P < 0.005). The mean number of oocysts per gram of feces (OPG) of group T0, in the two phases studied, was significantly higher (P < 0.005), with no significant difference between the medicated groups. Group T0 showed significantly lower mean carcass yield (P < 0.005). The mean body mass (BM) of group T0 was lower (P < 0.005) than that of group T1; between groups T1 and T2 there was no significant difference (P > 0.005) in mean body mass (BM). The use of salinomycin at doses of 1 and 2 mg/kg resulted in greater weight gain in the animals of groups T1 and T2 and, consequently, greater gross margin for these treatments. The results obtained showed that T1 and T2 were equivalent for the control of caprine eimeriosis, since both treatments showed greater weight gain and lower oocyst counts than group T0. It is concluded that treatment with salinomycin at a dose of 1.0 mg/kg is effective, provided it is administered from the second week of life onward.

  8. Reflexões sobre os juramentos utilizados nas faculdades médicas do Brasil

    Directory of Open Access Journals (Sweden)

    Almir Galvão Vieira Bitencourt

    Full Text Available OBJECTIVE: To evaluate the medical oaths used in Brazilian medical schools. METHODS: Descriptive study including Brazilian medical schools with classes graduated by 2004. A questionnaire was sent to those responsible for each school, in three different contact attempts, by conventional mail, telephone, fax and e-mail. RESULTS: Of a total of 96 schools, 48 (51.1%) answered the questionnaire, 25 of which (52.1%) were public. All the schools used some type of oath during the course. Regarding the text used: 44 schools (91.7%) used the Hippocratic Oath, excerpts from it, or modifications of it. Thirty-eight responses (79.2%) contained the full oath, and these were analyzed for content. The most frequently cited themes were the principles of beneficence and non-maleficence (94.7%) and medical confidentiality (97.4%). Only one oath mentioned patient autonomy, and none mentioned the principle of justice. CONCLUSIONS: The use of medical oaths is widespread in Brazilian medical schools, but, contrary to what is seen in other countries, the texts used are still largely based on the Hippocratic Oath and do not address current themes important to medical ethics and bioethics.

  9. As professoras da sala comum e seus dizeres: Atendimento educacional especializado nas salas de recursos multifuncionais

    Directory of Open Access Journals (Sweden)

    Andréia Heiderscheidt Fuck

    2015-05-01

    Full Text Available http://dx.doi.org/10.5902/1984686X16093 This paper presents the statements of regular classroom teachers about Specialized Educational Assistance (AEE) in Multifunctional Resource Rooms (SRM). Its main objective is to investigate what these teachers know and expect from this service in the school context. Data were collected through questionnaires administered to 144 teachers of the 1st to 5th grades in the municipal school system of Joinville, and the data were analyzed by means of Franco's (2012) content analysis. The results indicate that regular classroom teachers know that AEE in the SRM is intended for students who are the target population of Special Education and that one of its objectives is to provide resources and adaptations; however, they expect it to eliminate students' needs/difficulties in order to develop learning. It was concluded that the issues raised by the teachers surveyed are consistent with the functions of the specialized teacher, provided it is understood that this is a responsibility that runs through the entire school organization, involving the professionals engaged in this process. The study also showed insufficient exchange/partnership between SRM and regular classroom teachers, as well as the need for more information and training in the school context focused on teaching work from a collaborative perspective.

  10. Trajetórias escolares prolongadas nas camadas populares Extended academic life for lower class people

    Directory of Open Access Journals (Sweden)

    Débora Cristina Piotto

    2008-12-01

    Full Text Available The purpose of this study is to discuss the contribution of recent studies for understanding the conditions that make extended learning paths possible in low-income segments. The meticulous analysis of its results and an interview with a student of a highly selective higher education course from this background enables us to raise some issues. The review of some studies on the subject evidencing the central role of the family in extended learning paths also enables us to question some of the meanings commonly assigned to extended learning in low-income segments, such as conformism, suffering, and cultural disruption. The need for further investigation on the role of school, as well as on the existence of other meanings related to school access and the experience of poor students in higher education courses has been made evident.

  11. Scientific Discovery through Advanced Computing in Plasma Science

    Science.gov (United States)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations
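    As a toy illustration of the scaling property mentioned above, the sketch below computes weak-scaling efficiency, where the problem size grows in proportion to the processor count and an ideal code keeps the runtime constant. The timing numbers are invented and are not data from any SciDAC code.

      # Weak-scaling efficiency: the problem size grows with the processor count,
      # so an ideal code keeps the runtime constant. Timings below are invented.

      def weak_scaling_efficiency(t_base, t_n):
          """Efficiency relative to the single-processor baseline (1.0 = perfect scaling)."""
          return t_base / t_n

      baseline = 120.0  # seconds on 1 processor for the base problem size (assumed)
      timings = {16: 125.0, 256: 131.0, 4096: 148.0}  # seconds at larger scales (assumed)

      for procs, t in timings.items():
          print(f"{procs:5d} processors: efficiency = {weak_scaling_efficiency(baseline, t):.2f}")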

  12. Using a polarizing film in the manufacture of panoramic Stokes polarimeters at the Main Astronomical Observatory of NAS of Ukraine

    Science.gov (United States)

    Syniavskyi, I.; Ivanov, Yu.; Vidmachenko, A. P.; Sergeev, A.

    2015-08-01

    The construction of an imaging Stokes polarimeter at the MAO NAS of Ukraine is proposed. It allows measuring three components of the Stokes vector simultaneously over a large field of view (FOV), without restrictions on the relative aperture of the system. Moreover, the polarimeter can be converted into a low-resolution spectropolarimeter by placing a transmission diffraction grating in the optical axis.
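    For orientation, the three linear-polarization components of a Stokes vector can be recovered from intensities measured behind a polarizing film at a few orientations. The sketch below uses the textbook 0°/45°/90°/135° analyzer scheme with invented intensities; it is not a description of the MAO instrument's actual optical layout.

      import math

      # Textbook recovery of the first three Stokes parameters (I, Q, U) from
      # intensities measured behind a linear polarizer at four orientations.
      # Intensity values are invented for illustration.

      def stokes_iqu(i0, i45, i90, i135):
          I = i0 + i90     # total intensity
          Q = i0 - i90     # horizontal vs. vertical preference
          U = i45 - i135   # +45 deg vs. -45 deg preference
          return I, Q, U

      I, Q, U = stokes_iqu(i0=1.00, i45=0.80, i90=0.40, i135=0.60)
      dolp = math.hypot(Q, U) / I                      # degree of linear polarization
      aolp = 0.5 * math.degrees(math.atan2(U, Q))      # angle of linear polarization
      print(f"I={I:.2f}, Q={Q:.2f}, U={U:.2f}, DoLP={dolp:.2f}, AoLP={aolp:.1f} deg")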

  13. [Scientific and practical activity of the Department of Muscle Biochemistry of the Palladin Institute of Biochemistry of NAS of Ukraine].

    Science.gov (United States)

    Vynogradova, R P; Danilova, V M; Yurasova, S P

    2017-01-01

    The article focuses on scientific and practical activity of the Department of Muscle Biochemistry of the Palladin Institute of Biochemistry of NAS of Ukraine in the context of its foundation and development. Main findings and practical achievements in the area of muscle biochemistry are summarized and discussed.

  14. The use of net analyte signal (NAS) in near infrared spectroscopy pharmaceutical applications: interpretability and figures of merit.

    Science.gov (United States)

    Sarraguça, Mafalda Cruz; Lopes, João Almeida

    2009-05-29

    Near infrared spectroscopy (NIRS) has been extensively used as an analytical method for quality control of solid dosage forms for the pharmaceutical industry. Pharmaceutical formulations can be extremely complex, containing typically one or more active product ingredients (API) and various excipients, yielding very complex near infrared (NIR) spectra. The NIR spectra interpretability can be improved using the concept of net analyte signal (NAS). NAS is defined as the part of the spectrum unique to the analyte of interest. The objective of this work was to compare two different methods to estimate the API's NAS vector of different pharmaceutical formulations. The main difference between the methods is the knowledge of API free formulations NIR spectra. The comparison between the two methods was assessed in a qualitative and quantitative way. Results showed that both methods produced good results in terms of the similarity between the NAS vector and the pure API spectrum, as well as in the ability to predict the API concentration of unknown samples. Moreover, figures of merit such as sensitivity, selectivity, and limit of detection were estimated in a straightforward manner.
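    To make the NAS idea concrete, a minimal numerical sketch is given below. It follows the common projection-based construction (project each spectrum onto the orthogonal complement of the space spanned by API-free, interferent-only spectra); the matrices are random stand-ins rather than real NIR data, and this is not necessarily either of the two estimation methods compared in the paper.

      import numpy as np

      # Net analyte signal (NAS) by orthogonal projection: the part of a measured
      # spectrum that cannot be explained by the interferent (API-free) spectra.
      # Random matrices stand in for real NIR spectra in this illustration.

      rng = np.random.default_rng(0)
      n_wavelengths = 200

      X_interf = rng.normal(size=(5, n_wavelengths))   # spectra of API-free formulations (assumed)
      x_sample = rng.normal(size=n_wavelengths)        # measured spectrum of one sample (assumed)

      # Projector onto the orthogonal complement of the interferent subspace,
      # implemented via the pseudoinverse: P = I - A A^+, with A = X_interf^T.
      P = np.eye(n_wavelengths) - X_interf.T @ np.linalg.pinv(X_interf.T)
      nas_vector = P @ x_sample

      # The norm of the NAS vector is what calibration models relate to API concentration.
      print("||NAS|| =", round(float(np.linalg.norm(nas_vector)), 3))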

  15. Cohort Profile : The National Academy of Sciences-National Research Council Twin Registry (NAS-NRC Twin Registry)

    NARCIS (Netherlands)

    Gatz, Margaret; Harris, Jennifer R.; Kaprio, Jaakko; McGue, Matt; Smith, Nicholas L.; Snieder, Harold; Spiro, Avron; Butler, David A.

    The National Academy of Sciences-National Research Council Twin Registry (NAS-NRC Twin Registry) is a comprehensive registry of White male twin pairs born in the USA between 1917 and 1927, both of the twins having served in the military. The purpose was medical research and ultimately improved

  16. EPA's Reanalysis of Key Issues Related to Dioxin Toxicity and Response to NAS Comments (Volume 1) (Interagency Science Discussion Draft)

    Science.gov (United States)

    EPA is releasing the draft report, EPA's Reanalysis of Key Issues Related to Dioxin Toxicity and Response to NAS Comments (Volume 1), that was distributed to Federal agencies and White House Offices for comment during the Science Discussion step.

  Theoretical studies of optical gain tuning by hydrostatic pressure in GaInNAs/GaAs quantum wells

    International Nuclear Information System (INIS)

    Gladysiewicz, M.; Wartak, M. S.; Kudrawiec, R.

    2014-01-01

    In order to describe theoretically the tuning of the optical gain by hydrostatic pressure in GaInNAs/GaAs quantum wells (QWs), optical gain calculations within the k·p approach were developed and applied to N-containing and N-free QWs. The electronic band structure and the optical gain for the GaInNAs/GaAs QW were calculated within the 10-band k·p model, which takes into account the interaction of electron levels in the QW with the nitrogen resonant level in GaInNAs. It has been shown that this interaction increases with the hydrostatic pressure and, as a result, the optical gain for the GaInNAs/GaAs QW decreases by about 40% and 80% for the transverse electric and transverse magnetic modes, respectively, for a hydrostatic pressure change from 0 to 40 kilobars. Such an effect is not observed for N-free QWs, where the dispersion of electron and hole energies remains unchanged with hydrostatic pressure. This is because the conduction and valence band potentials in a GaInAs/GaAs QW scale linearly with hydrostatic pressure.
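    The pressure dependence described above stems from the band-anticrossing (BAC) interaction between the GaInAs conduction-band edge and the nitrogen resonant level, which the 10-band k·p model contains as a sub-block. A minimal two-level sketch is shown below; the pressure coefficients and coupling strength are typical literature-style values assumed purely for illustration, not parameters taken from this paper.

      import numpy as np

      # Two-level band-anticrossing (BAC) sketch: the GaInAs conduction-band edge E_M
      # moves up quickly with hydrostatic pressure while the localized N level E_N moves
      # slowly, so the lower (E-) subband becomes increasingly N-like.
      # All numbers below are assumed, illustrative values (eV, eV/kbar).

      E_M0, dEM_dP = 1.00, 0.010    # conduction-band edge at P = 0 and its pressure coefficient
      E_N0, dEN_dP = 1.65, 0.002    # nitrogen resonant level and its (weaker) pressure coefficient
      V = 0.30                      # BAC coupling matrix element, eV (assumed)

      for P in (0.0, 20.0, 40.0):   # pressure in kbar
          E_M = E_M0 + dEM_dP * P
          E_N = E_N0 + dEN_dP * P
          E_minus = 0.5 * (E_M + E_N - np.sqrt((E_M - E_N) ** 2 + 4 * V ** 2))
          # Fraction of N character in the lower branch grows as E_M approaches E_N.
          n_fraction = V ** 2 / (V ** 2 + (E_minus - E_N) ** 2)
          print(f"P = {P:4.0f} kbar: E- = {E_minus:.3f} eV, N character = {n_fraction:.2f}")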

  17. Right and Wrong and Cultural Diversity: Replication of the 2002 NAS/Zogby Poll on Business Ethics

    Science.gov (United States)

    Ludlum, Marty; Mascaloinov, Sergei

    2004-01-01

    In April 2002, a NAS/Zogby poll found that only a quarter of sampled students perceived uniform standards of "right and wrong" and that most students felt that ethical behavior depends on cultural diversity. In this effort to replicate those findings in a larger sample of American college students, the authors obtained results that…

  18. Saúde e meio ambiente nas cidades: os desafios da saúde ambiental

    Directory of Open Access Journals (Sweden)

    Nelson Gouveia

    1999-02-01

    Full Text Available Within a few years, our planet will have more inhabitants in urban than in rural areas. Unbridled urbanization, without regulatory and control mechanisms, typical of peripheral countries, has brought with it enormous repercussions for the health of the population. Problems such as insufficient basic sanitation services, inadequate garbage collection and disposal, and precarious housing conditions, traditionally related to poverty and underdevelopment, are now joined by chemical and physical pollution of the air, water and soil, environmental problems formerly considered "modern". Once again, it is the poorest populations that bear most of the negative effects of urbanization, generating a situation of extreme environmental and health inequality and inequity. To reverse this picture, environmental issues must be reincorporated into health policies, and the objectives of environmental health integrated into a broad strategy of sustainable development. A more integrated approach, with intersectoral mechanisms that enable a wide-ranging dialogue among the parties, will bring enormous benefits in achieving better living conditions in cities. Environmental health today faces the challenge of promoting better quality of life and health in cities, and the opportunity to confront the absurd picture of social exclusion from the perspective of equity.

  19. CONTROLE GERENCIAL: UMA ANÁLISE NAS EMPRESAS CONTÁBEIS DA CIDADE DE CAICÓ/RN.

    Directory of Open Access Journals (Sweden)

    Hugo Azevedo Rangel de Morais

    2016-07-01

    Full Text Available Management control is necessary for a company's efficient internal development. With constant changes in legislation and the evolution of information technology in accounting firms, efficient management control is essential; through it, it is possible to see how the company is doing in its day-to-day operations. This paper shows the importance of management control for accounting firms located in the city of Caicó/RN, demonstrating its importance in supporting decision making. The general objective of this study is to analyze whether the accounting firms of Caicó have management controls that assist them in decision making, verifying whether management accounting services are provided and analyzing the importance of management control for the owners. The contextualization of the topic is based on bibliographic research. The methodology developed in the research is classified as descriptive; from the point of view of its nature it is applied research, with a qualitative approach of exploratory character, and as regards technical procedures it is a survey. It was observed that the owners are aware of the importance of management control, and most put it into practice, achieving a good level of reliable controls to support decision making; it was also found that most firms plan the objectives to be controlled.

  1. UAS Integration in the NAS Project: Flight Test 3 Data Analysis of JADEM-Autoresolver Detect and Avoid System

    Science.gov (United States)

    Gong, Chester; Wu, Minghong G.; Santiago, Confesor

    2016-01-01

    The Unmanned Aircraft Systems Integration in the National Airspace System project, or UAS Integration in the NAS, aims to reduce technical barriers related to safety and operational challenges associated with enabling routine UAS access to the NAS. The UAS Integration in the NAS Project conducted a flight test activity, referred to as Flight Test 3 (FT3), involving several Detect-and-Avoid (DAA) research prototype systems between June 15, 2015 and August 12, 2015 at the Armstrong Flight Research Center (AFRC). This report documents the flight testing and analysis results for the NASA Ames-developed JADEM-Autoresolver DAA system, referred to as 'Autoresolver' herein. Four flight test days (June 17, 18, 22, and July 22) were dedicated to Autoresolver testing. The objectives of this test were as follows: 1. Validate CPA prediction accuracy and detect-and-avoid (DAA, formerly known as self-separation) alerting logic in realistic flight conditions. 2. Validate DAA trajectory model including maneuvers. 3. Evaluate TCAS/DAA interoperability. 4. Inform final Minimum Operating Performance Standards (MOPS). Flight test scenarios were designed to collect data to directly address the objectives 1-3. Objective 4, inform final MOPS, was a general objective applicable to the UAS in the NAS project as a whole, of which flight test is a subset. This report presents analysis results completed in support of the UAS in the NAS project FT3 data review conducted on October 20, 2015. Due to time constraints and, to a lesser extent, TCAS data collection issues, objective 3 was not evaluated in this analysis.
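    One of the quantities validated in FT3 is the predicted closest point of approach (CPA) between ownship and an intruder. Below is a minimal constant-velocity sketch of that geometry, using a flat-earth, straight-line assumption and made-up aircraft states; the Autoresolver's actual trajectory model, which includes maneuvers, is considerably more elaborate.

      import numpy as np

      # Closest point of approach (CPA) for two aircraft under a constant-velocity,
      # flat-earth assumption. Positions in meters, velocities in m/s (all invented).

      def cpa(p_own, v_own, p_int, v_int):
          """Return (time_to_cpa_s, separation_at_cpa_m) for straight-line trajectories."""
          dp = np.asarray(p_int, float) - np.asarray(p_own, float)
          dv = np.asarray(v_int, float) - np.asarray(v_own, float)
          dv2 = float(dv @ dv)
          t = 0.0 if dv2 == 0.0 else max(0.0, -float(dp @ dv) / dv2)  # clamp to the future
          return t, float(np.linalg.norm(dp + t * dv))

      t_cpa, d_cpa = cpa(p_own=[0.0, 0.0], v_own=[60.0, 0.0],
                         p_int=[9000.0, 1500.0], v_int=[-70.0, 0.0])
      print(f"time to CPA: {t_cpa:.1f} s, predicted miss distance: {d_cpa:.0f} m")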

  2. Procedimiento para la obtención de levaduras vínicas superproductoras de manoproteínas mediante tecnologías no recombinantes

    OpenAIRE

    Barcenilla Moraleda, José María; González Ramos, Daniel; Tabera, Laura; González García, Ramón

    2008-01-01

    Procedure for obtaining wine yeasts that overproduce mannoproteins using non-recombinant technologies. A procedure for obtaining yeast strains that overproduce mannoproteins through the selection of mutants resistant to the K9 toxin, strains obtainable by said procedure, and uses thereof.

  3. Análise das práticas de evidenciação de informações obrigatórias, não-obrigatórias e avançadas nas demonstrações contábeis das sociedades anônimas no Brasil: um estudo comparativo dos exercícios de 2002 e 2005 Analysis of mandatory, non-mandatory and advanced information disclosure practices in financial statements of companies in Brazil: a comparative study between 2002 and 2005

    Directory of Open Access Journals (Sweden)

    Vera Maria Rodrigues Ponte

    2007-12-01

    Full Text Available All over the world, there have been discussions on transparency and quality in the disclosure of accounting information. Aiming at contributing towards this debate, this study seeks to answer the following research question: What are the perceived changes in the disclosure of mandatory, non-mandatory and advanced accounting reporting experienced by companies in Brazil? Financial statements from 95 companies were assessed, referring to corporate annual reports of 2002, and from 119 companies referring to corporate annual reports of 2005. Concerning items recommended in rules numbers 15/87, 17/89 and 19/90 by the Brazilian Securities and Exchange Commission, this research reveals that there was no improvement in the disclosure practices of the companies studied. With regard to the advanced, non-mandatory accounting information advocated by corporate governance practices, the companies analyzed show progress in its disclosure, paying particular attention to reporting on their social responsibility practices and the Social Balance Sheet, the Statement of Cash Flows (DFC) and the Statement of Value Added (DVA).

  4. Um estudo experimental sobre os afundamentos nas trilhas de rodas de pavimentos delgados com basaltos alterados

    Directory of Open Access Journals (Sweden)

    Washington Peres Núnez

    2010-04-01

    Full Text Available This paper presents the results of a study of rutting of thin pavements where weathered basalts were used as base layers. A linear traffic simulator applied more than 267,000 axle loads, ranging from 82 to 130 kN, to five full-scale test sections built in a Pavement Testing Facility located in Porto Alegre (Southern Brazil). Two different weathered basalts and three base thicknesses (16 cm, 21 cm and 32 cm) were used. A total of 4,148 measurements of rut depth, made at intervals, provided a statistically significant data set. Rutting evolution showed to depend not only on traffic characteristics but also on pavement structure. Considering rutting as the major failure cause of thin pavements and a rut depth of 25 mm as the terminal criterion, load equivalence factors were calculated by means of a reliability analysis.
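    To give a feel for what a load equivalence factor expresses, the sketch below uses the classical AASHTO-style fourth-power approximation relative to the 82 kN standard axle. This is only an illustrative rule of thumb, not the reliability-based procedure actually used in the study.

      # Illustrative load equivalence factors (LEF) via the classical fourth-power rule:
      # pavement damage grows roughly with (axle load / standard axle load)^4.
      # The study derived its factors from a reliability analysis; this is only a rough analogue.

      STANDARD_AXLE_KN = 82.0

      def fourth_power_lef(axle_load_kn, exponent=4.0):
          return (axle_load_kn / STANDARD_AXLE_KN) ** exponent

      for load in (82.0, 100.0, 120.0, 130.0):
          print(f"{load:5.0f} kN axle -> LEF ~ {fourth_power_lef(load):.2f}")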

  5. A Importância do Planejamento Tributário nas Micro e Pequenas Empresas

    Directory of Open Access Journals (Sweden)

    Lucilaine Escobar Teixeira Sampaio

    2015-06-01

    Full Text Available The general objective of this study is to demonstrate the importance of tax planning for micro and small enterprises (MSEs) in Brazil. The specific aims are to identify the existing forms of taxation that can be used by MSEs and to verify which is the most viable. To achieve these objectives, bibliographic research was carried out based on articles and books from the collections of Faculdade Anhanguera, the Universidade Católica Dom Bosco (UCDB) and personal collections, as well as journal articles and legislation available on the internet and research manuals on the website of the Brazilian Micro and Small Business Support Service (Sebrae). From the results found, it was verified that MSEs play a relevant role in the economy and, for this reason, the government has in recent decades provided several incentives to stimulate the opening of more businesses of this size and to prevent many from going bankrupt, given that surveys, mainly by Sebrae, show a high rate of MSEs going bankrupt every year. In recent decades, after the incentives adopted, among them the creation of the Simples, a new special form of taxation for MSEs, there has been a reduction in the closure of businesses of this size, but the rate remains high. There are several causes for MSE bankruptcy, among which the high tax burden stands out. Thus, it is essential that business owners invest in tax planning in order to choose the best form of taxation and avoid bankruptcy.

  6. A amizade nas relações de ensino e aprendizagem

    Directory of Open Access Journals (Sweden)

    Elaine Conte

    2016-06-01

    Full Text Available http://dx.doi.org/10.5007/2175-795X.2016v34n1p205 This text proposes to re-signify the communicative bonds of friendship, revealed in the enthusiasm that brings learning together in interconnected practices, in order to explore how friendship can bring sensitivity and new perceptions to the process of human formation. It also seeks to investigate the possibilities and limits of friendship relations in contemporary times, sustained by invisible threads even at a distance. Friendship constitutes at once a political, ethical, aesthetic and social space of different forms of perception, and it is a pedagogical action of dialogue and of the construction of collaborative and formative learning. Friendship as possibility and risk, openness and overcoming of differences, freedom for questioning, thought and reciprocal transformation, presents itself as a mediator of educational processes and of constant learning. There are certainly effective references of friendship that nourish knowledge through trust based on mutual respect, hence the need to think of education as a processual and dialogical relation of friendship in times of mutability and interactive networks. It is precisely from the bonds and expressions of friendship that it would be creative and challenging to insist on openness to otherness, to encounter, and to the possibilities of human transformation in order to change the conditions under which life finds itself threatened. Investigating how friendship relations are configured today, in their differences and dialogues within learning networks, strengthens the debate on the meaning of the educator-learner relationship, creating bonds of friendship and affection and contributing to a more participatory, unfinished and collaboratively formative education.

  7. Imprensa e voto nas eleições presidenciais brasileiras de 2002 e 2006

    Directory of Open Access Journals (Sweden)

    Pedro Santos Mundim

    2012-02-01

    Full Text Available This article presents the results of a study on the effects of press coverage on the vote in the 2002 and 2006 Brazilian presidential elections. It is argued that coverage was an important factor in both contests. The dependent variable consists of the time series of voting intentions for the main candidates: Lula (Partido dos Trabalhadores), Serra (Partido da Social Democracia Brasileira), Garotinho (Partido Socialista Brasileiro) and Ciro (Partido Popular Socialista) in 2002, and Lula, Alckmin (Partido da Social Democracia Brasileira), Heloísa Helena (Partido Socialismo e Liberdade) and Cristovam Buarque (Partido Democrático Trabalhista) in 2006. The main explanatory variable is the electoral coverage of four major national newspapers: Folha de S. Paulo, O Estado de S. Paulo, O Globo and Jornal do Brasil. The model is completed by the following control variables: the candidates' party advertising, the free electoral broadcast time in the first and second rounds, the presidential debates, and the presidential popularity index. The models were estimated by OLS. The test results indicate that, in 2002, the press coverage of Lula and Ciro Gomes was one of the factors responsible for the observed variation in their respective voting intentions. In 2006, the dynamic was somewhat more complex. Only voting intentions for Heloísa Helena were affected by her own coverage. At first sight, it is surprising that the extremely negative coverage of Lula did not cost him votes. But it had an indirect, and important, impact for Alckmin and Cristovam Buarque. As this impact was greater during the "dossiê tucano" scandal, it can be stated that press coverage contributed decisively to the occurrence of the second round in the latter presidential election. These results hold even when analyzing the votes of voters from different educational groups, a control for different levels of exposure to...
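    For readers who want to see the shape of such a model, a minimal time-series OLS sketch in the spirit described above is given here. The series and variable names are synthetic stand-ins; the original specification, lag structure and estimation details are in the article itself.

      import numpy as np

      # Hedged sketch of an OLS model of weekly vote intention on press coverage,
      # with a popularity control. All series below are synthetic stand-ins.

      rng = np.random.default_rng(42)
      weeks = 40
      coverage_tone = rng.normal(0.0, 1.0, weeks)              # net tone of newspaper coverage (assumed index)
      popularity = 50 + np.cumsum(rng.normal(0.0, 0.5, weeks))  # presidential popularity (assumed)
      vote_intention = 30 + 2.0 * coverage_tone + 0.3 * popularity + rng.normal(0.0, 1.5, weeks)

      X = np.column_stack([np.ones(weeks), coverage_tone, popularity])
      beta, *_ = np.linalg.lstsq(X, vote_intention, rcond=None)
      print("intercept, coverage effect, popularity effect:", np.round(beta, 2))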

  8. Estrutura de capital: o papel das fontes de financiamento nas quais companhias abertas brasileiras se baseiam

    Directory of Open Access Journals (Sweden)

    Wilson Tarantin Junior

    2015-01-01

    Full Text Available Abstract: This study evaluated the capital structure of Brazilian listed companies from 2005 to 2012, examining the role of the financing sources on which these companies rely. To this end, the proportion of debt from three distinct sources was assessed: financial institutions, the capital markets, and sources with subsidized interest rates, the latter representing an institutional factor of the Brazilian economy. A sample of 95 companies was used, drawn from the 150 largest companies with shares traded on the São Paulo Stock, Commodities and Futures Exchange (BM&FBOVESPA). Using panel data models, the results show that financing sources affect the formation of companies' capital structure, influencing both leverage and debt maturity. Regarding leverage, companies that raise a larger share of their funds in the capital markets are more leveraged. The same does not hold for companies with a larger share of subsidized funds. Regarding debt maturity, funds of different maturities are raised from different sources: shorter-maturity funds are raised from financial institutions, and longer-maturity funds are raised in the capital markets and from sources with subsidized interest rates, namely the Brazilian Development Bank (BNDES). Comparing capital-market funds with subsidized funds, the former have longer maturities. This result can be explained by the growth of the Brazilian capital market in recent years, from 2009 onward, such that companies have been relying on the capital markets for their longest-maturity financing and on subsidized funds from the BNDES for intermediate-maturity financing.

  9. Efeito do armazenamento de argilas esmectíticas nas suas propriedades reológicas

    Directory of Open Access Journals (Sweden)

    I. A. da Silva

    Full Text Available Abstract: This work investigated the influence of storage on the rheological properties of natural and industrialized smectite clays, given that the double-exchange reaction that occurs after treating polycationic clays with Na2CO3 is reversible. The phenomena involved in this reaction are still not fully understood, and earlier studies show improvements in some properties. The rheological properties were determined for sodium clays in 1995 and for polycationic clays treated with sodium carbonate (Na2CO3) in 2015. The physical, chemical and mineralogical characterization of the samples was carried out using the following techniques: particle size analysis by laser diffraction, chemical composition by X-ray fluorescence, X-ray diffraction, and thermal analyses (DTA and TG). The rheology of the dispersions was determined through apparent viscosity, plastic viscosity and filtrate volume, with petroleum-industry standards subsequently considered only as reference parameters. The results showed that the storage conditions, moisture, and particle size of the samples led to improvements in their rheological properties over the years, indicating that the cation exchange reaction is not reversible, which is important for the clays' shelf life after manufacture.

  10. OS USOS DO FACEBOOK NAS MANIFESTAÇÕES DOS SIMBOLISMOS ORGANIZACIONAIS

    Directory of Open Access Journals (Sweden)

    Camila Uliana Donna

    Full Text Available This article aims to understand the relationship between the uses of Facebook by members of the online newspaper XYZ and the manifestations of organizational symbolism. To contextualize the approach taken to this objective, theoretical contributions on symbolic interactionism, interpretivism and organizational symbolism were articulated. These contributions underpin the argument that social interaction, communication and the uses of Facebook are interrelated in everyday organizational life. From this everyday relationship, different social groups elaborate symbolic constructions with the potential to mark the organizational context. The starting point is the understanding that this occurs insofar as the constructed symbolisms interfere in the articulations among the social groups themselves within organizations. A qualitative method guided the empirical approach in this study. Data collection was carried out through bibliographic and documentary research, netnography and semi-structured interviews. The data were treated by means of thematic content analysis. After the analysis, it was observed that Facebook is a channel of symbolic exchanges among subjects in the organization; however, in this medium, these exchanges are veiled. A shared understanding emerged that there is a great deal of exposure on Facebook and that people are therefore afraid to post personal or work-related information, because they believe they are being watched. In this context, other digital social networks were also identified as vehicles for the exchange of symbolic content.

  11. Saúde nas metrópoles - Doenças infecciosas

    Directory of Open Access Journals (Sweden)

    Aluisio Cotrim Segurado

    2016-04-01

    Full Text Available Urbanization is an irreversible process on a global scale, and it is estimated that the number of people living in cities will reach 67% of the planet's population by 2050. Low- and middle-income countries, in turn, currently have 30% to 40% of their urban population living in slums, at risk for various health problems. In Brazil, although 84.3% of the population already lived in urban areas in 2010, there are at present no consistent actions aimed at addressing urban health issues. This article discusses the epidemiological situation of infectious diseases of public health interest (dengue, HIV infection/AIDS, leptospirosis, leprosy and tuberculosis) from the year 2000 onward in the country's 17 metropolises, in order to clarify the current role of infectious diseases in the context of Brazilian urban health.

  12. VISUALIDADE, MEMÓRIA E SONHO NAS DRAMATURGIAS DE PHILIPPE GENTY

    Directory of Open Access Journals (Sweden)

    Flávia Ruchdeschel D'ávila

    2016-12-01

    Full Text Available Philippe Genty is a French dramaturge who began working in the 1960s as a puppeteer. Known worldwide for his shows and for television series conceived between the 1970s and 1980s, over time his work as a puppeteer evolved into a type of theatre the artist calls visual theatre. Visual theatre is a concept that emerged in the 1980s in Europe and, in the case of Philippe Genty, by way of a preliminary definition of the term, it can be said that the puppet ceased to be the main element of his shows and that all the elements of the staging, including the human element, came to contribute significantly to the construction of his new dramaturgies. In these shows the text - understood here as the text the actor speaks on stage - has the same or less importance than the other expressive elements, such as music, dance, light, matter, objects, performers and space. Like visuality, memory and dream can be considered expressive materials that play a significant role in Philippe Genty's dramaturgies, and it is these topics that this article discusses.

  13. House of Cards: dna Shakesperiano na trilogia e nas séries

    Directory of Open Access Journals (Sweden)

    Brunilda T. Reichmann

    2017-12-01

    Full Text Available This article offers a reading of contemporary literary and television productions: the political trilogy House of Cards, To Play the King and The Final Cut, by the English writer Michael Dobbs; the BBC series The House of Cards Trilogy, an adaptation of these novels; and the Netflix series House of Cards, based on the earlier ones. It dwells on some literary elements of these productions and relates them to plays by the English playwright William Shakespeare, more specifically Macbeth, Othello, the Moor of Venice, and Richard III. We attempt to show how novelists, screenwriters and series directors celebrate Shakespeare's unmatched art by reworking themes, updating contexts and reconstructing personality traits of his unforgettable characters. In short, this text seeks to recover some genetic traits of Shakespeare's plays present in contemporary artistic and media production.

  14. O Paic e a equidade nas escolas de ensino fundamental cearenses

    Directory of Open Access Journals (Sweden)

    Paula Kasmirski

    2017-12-01

    Full Text Available The purpose of this article is to verify whether the Programa Alfabetização na Idade Certa (Paic) contributed to improving equity in the municipal school systems of the state of Ceará. Equity was conceptualized as a situation in which all students, regardless of their background, reach appropriate levels of achievement. As the outcome measure, students' proficiency in Portuguese Language (LP) on the Prova Brasil was used. The impact of Paic on the probability of a student reaching adequate performance was evaluated by the difference-in-differences method. It was concluded that Paic improved equity in the cohort analyzed, since it increased the proportion of students reaching appropriate proficiency in LP, especially in schools where most students are poor. The policy analysis revealed that Paic has components aligned with principles of justice that fit the goal of equity in basic education schools.
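    As a reminder of what the difference-in-differences estimator does, a minimal two-group, two-period sketch with invented proportions is shown below; the actual study works with student-level microdata and controls.

      # Difference-in-differences with invented group means: the estimate is the change
      # in the treated group's outcome minus the change in the comparison group's outcome.

      treated_before, treated_after = 0.42, 0.61   # share reaching adequate proficiency (assumed)
      control_before, control_after = 0.45, 0.52   # comparison group (assumed)

      did = (treated_after - treated_before) - (control_after - control_before)
      print(f"difference-in-differences estimate: {did:+.2f}")   # +0.12 here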

  15. Lei 11.645/08: a questão étnico-cultural nas escolas

    Directory of Open Access Journals (Sweden)

    Rosangela Gomes Moreira

    2012-12-01

    Full Text Available This article presents information on teachers' and students' knowledge of Law 11.645/2008, which makes the teaching of Black and Indigenous cultures mandatory in educational institutions throughout the country. The research was carried out with students and teachers from public and private schools in the municipality of Sinop-MT and found that prejudice is still present in our society, especially with regard to Black and Indigenous people. The school proved to be a path toward overcoming prejudice. It was thus concluded that the work being carried out in some schools is beginning to show positive results in the effort to eliminate prejudice and racism, leading students to regard one another with respect. Keywords: education; Law 11.645/2008; teachers and students.

  16. Estrategias de obtención de proteínas recombinantes en Escherichia coli

    Directory of Open Access Journals (Sweden)

    José García

    2013-08-01

    Full Text Available The expression of recombinant proteins has been favored by the use of Escherichia coli because of its relatively low cost, high culture density, ease of genetic manipulation, and the many compatible biotechnological tools available. This article presents some strategies for expression in Escherichia coli, highlighting genetic and physiological factors that include the copy number of the expression vector, the characteristics of the gene, the stability of the messenger ribonucleic acid, the promoter employed, the strain used, the composition of the culture medium, and the operating parameters of the fermenter; strain preservation and culture and purification strategies are also addressed.

  17. A educação nas constituições brasileiras

    Directory of Open Access Journals (Sweden)

    Raquel Recker Rabello Bulhões

    2009-02-01

    Full Text Available The concern of public authorities with education is present in all Brazilian constitutions: from the first of them, after independence, granted by D. Pedro I in 1824; through the Republican constitution of 1891; the 1934 constitution of the Estado Novo; that of 1937; that of 1946, when the country was redemocratized; followed by that of 1967, of military inspiration, which limited civil society's power to choose its rulers; aggravated by AI-5 of 1968, which led to Constitutional Amendment no. 1 of 1969; up to the Constitution of 1988, the eighth Brazilian constitution, called the "Citizen Constitution" by its principal architect, deputy Ulysses Guimarães. Even so, the emphasis given to education in Brazilian constitutions has not always been the same, undergoing considerable changes over time.

  18. FUTEBOL FEMININO E AS BARREIRAS DO SEXISMO NAS ESCOLAS: reflexões acerca da invisibilidade

    Directory of Open Access Journals (Sweden)

    Cássia Cristina Furlan

    2009-12-01

    Full Text Available http://dx.doi.org/10.5007/2175-8042.2008n30p28 The research aimed to observe women soccer players and their self-representations, how they see women's participation in soccer, and whether schools encourage the practice of this activity, examining gender issues. It also analyzes the interfaces and ramifications of this practice within the school and within school physical education. Using a qualitative methodology, interviews were conducted with physical education undergraduates and athletes. It was found that prejudice against women's futsal is still present, and that the conditions of access to and participation in bodily and sporting practices still favor the male universe.

  19. Teorias utilizadas nas investigações sobre gestão do conhecimento

    Directory of Open Access Journals (Sweden)

    Luiza A. O. P. Xavier

    2012-12-01

    Full Text Available Knowledge management has been recognized by researchers and practitioners as crucial to the growth and development of organizations. The Information Systems field contributes research on knowledge management. This article aims to identify the main theories that have been used in research on knowledge management, considering theories related to the information systems field. The theories identified in about 40% of the articles analyzed are: Game Theory, Social Capital Theory, Theory of Planned Behavior, Social Exchange Theory, Dynamic Capabilities, and Theory of Reasoned Action. It was observed that these theories are used both in research on knowledge management as a whole and in research on one stage of its process, namely knowledge sharing. It is also worth noting that a particular aspect of knowledge management, for example an antecedent of knowledge sharing, can be investigated using different theories.

  20. Identidade, pertencimento e engajamento político nas mídias sociais

    Directory of Open Access Journals (Sweden)

    Pedro Simonard

    2017-09-01

    Full Text Available Adopting an interdisciplinary perspective, this article analyzes the changes introduced by social media in society, starting from concepts such as identity, territory, and public policy and their transposition to the virtual world, here considered a multidisciplinary object. Social media as a territory of belonging, where governments can establish mechanisms for more democratic and participatory oversight of public policies thanks to virtual interaction, are premises defended in this article, which ultimately shows how government and community can now dialogue and build actions, projects, and programs based on the engagement and political activism practiced on social networks. As methodology, a literature review was carried out, allowing the discussion of concepts from different areas of knowledge and establishing a relationship among identity, territory, and social media.

  1. LEGITIMIDADE CULTURAL LOCAL NAS PRÁTICAS ESTRATÉGICAS DE PMES

    Directory of Open Access Journals (Sweden)

    Henrique Muzzio

    2011-12-01

    Full Text Available The aim of this essay is to analyze the relationship between the strategies of small and medium-sized enterprises (SMEs) and local cultural values and institutional practices in the pursuit of competitiveness. Theoretical currents such as industrial organization (PORTER, 1980), the resource-based view (BARNEY, 1991), and dynamic capabilities (TEECE; PISANO and SHUEN, 1997) offer distinct explanations of how organizations become competitive. Local cultural legitimacy works from the perspective that strategic practices are socially constructed and need to be legitimized by local stakeholders. Given the nature of this work, we present a theoretical discussion grounded in institutional theory (DIMAGGIO and POWELL, 1983), agency theory (EMIRBAYER and MISCHE, 1998), and cultural theory (SMIRCICH, 1983). Its conclusions emphasize that SMEs could develop strategies based on local cultural legitimacy as an additional means of achieving competitiveness vis-à-vis large companies. Further research is suggested to fill the gaps in the field.

  2. Plurinacionalidade e cosmopolitismo: a diversidade cultural das cidades e diversidade comportamental nas metrópoles

    Directory of Open Access Journals (Sweden)

    José Luiz Quadros de Magalhães

    2009-12-01

    Full Text Available The article analyzes two important contemporary phenomena: the formation of the plurinational state as a break with the modern, national, homogenizing state, and the multiple identities found in contemporary cosmopolitan metropolises. Analyzing the formation of the national state as homogenizing and undemocratic, the text seeks to establish a connection between the two phenomena and to find a plural democratic solution that, while recognizing multiple identifications, looks for a common trace of humanity in each person, allowing the construction of plural spaces of dialogue under conditions of equality in diversity, against the risk of excessive fragmentation of an intolerant or fascist character.

  3. InGaNAs/GaAs multi-quantum wells and superlattices solar cells

    International Nuclear Information System (INIS)

    Courel Piedrahita, Maykel; Rimada Herrera, Julio Cesar; Hernandez Garcia, Luis

    2011-01-01

    A theoretical study of GaAs/InGaNAs solar cells based on multi-quantum well (MQWSC) and superlattice (SLSC) configurations is presented for the first time. The conversion efficiency is modeled as a function of well width and depth. Photon absorption increases as well levels are incorporated, and the photocurrent increases accordingly. It is shown that the MQWSC efficiency exceeds that of solar cells without wells by about 25%. A study of the viability of the SLSC is also presented. The conditions for resonant tunneling are established by the transfer matrix method for a superlattice with variable quantum well widths. The effective density of states and the absorption coefficients of the SL structure are calculated in order to determine the J-V characteristic. The influence of the superlattice (cluster) width on cell efficiency is investigated, showing better performance as the width and the number of clusters are increased. The SLSC efficiency is compared with the optimum efficiency obtained for the MQWSC, showing a further increase of 27%. (author)
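
    As a rough illustration of the transfer matrix machinery referred to above (a sketch under stated assumptions, not the authors' model: the layer thicknesses, barrier height, and effective mass below are placeholder values), the snippet computes the transmission probability through a one-dimensional piecewise-constant potential; sharp peaks of T(E) below the barrier height mark the resonant-tunneling levels of a double-barrier structure.

        import numpy as np

        HBAR = 1.054571817e-34   # reduced Planck constant, J*s
        M0 = 9.1093837015e-31    # electron rest mass, kg
        EV = 1.602176634e-19     # joules per electron volt

        def wavevector(E_eV, V_eV, m_eff):
            # Complex wave vector in a region of constant potential (evanescent if E < V).
            return np.sqrt(2.0 * m_eff * M0 * (E_eV - V_eV) * EV + 0j) / HBAR

        def boundary(k, x):
            # Plane-wave values and derivatives at position x, used to match solutions at an interface.
            return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                             [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

        def transmission(E_eV, potentials_eV, widths_nm, m_eff=0.067):
            # Transmission through the inner layers sandwiched between two
            # semi-infinite contacts (first and last entries of potentials_eV).
            xs = np.concatenate(([0.0], np.cumsum(widths_nm))) * 1e-9  # interface positions, m
            ks = [wavevector(E_eV, V, m_eff) for V in potentials_eV]
            M = np.eye(2, dtype=complex)
            for j, x in enumerate(xs):  # one matching matrix per interface
                M = M @ np.linalg.inv(boundary(ks[j], x)) @ boundary(ks[j + 1], x)
            t = 1.0 / M[0, 0]
            return float(np.real(ks[-1] / ks[0]) * abs(t) ** 2)

        # Placeholder double-barrier structure: contact / barrier / well / barrier / contact.
        layers_V = [0.0, 0.3, 0.0, 0.3, 0.0]   # potentials, eV
        layers_w = [2.0, 5.0, 2.0]             # inner-layer widths, nm
        for E in np.linspace(0.01, 0.29, 8):
            print(f"E = {E:.3f} eV   T = {transmission(E, layers_V, layers_w):.3e}")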

  4. A cultura do estupro como método perverso de controle nas sociedades patriarcais

    Directory of Open Access Journals (Sweden)

    Andrea Almeida Campos

    2016-08-01

    Full Text Available This article, in conceiving the crime of rape as the expression of a perversion on the part of those who commit it, a crime classified as heinous in Brazil, seeks to answer why it is tolerated and naturalized, especially in societies organized on a patriarchal model. This tolerance concerns not only impunity but also a set of practices that monitor, manipulate, and censor the victim's behavior and lacerate the victim's body. The article argues that these practices form methods of control belonging to a culture called "rape culture," a method characteristic of patriarchal societies, which would have in rape one of their instruments of phallic domination and of maintaining their power structures. The methodology employed was a literature review, that is, secondary sources. The article's main theoretical references are the works of Friedrich Engels, Sigmund Freud, and Michel Foucault.

  5. Responsabilidade social nas organizações de trabalho: benevolência ou culpa?

    Directory of Open Access Journals (Sweden)

    Maria de Lurdes Costa Domingos

    Full Text Available Corporate social responsibility comprises concrete, publicly displayed actions meant to announce a new posture on the part of those who run the systems of production in the face of the contradictions and tensions that capitalism provokes in society. Drawing on psychosociology, our aim is to reflect on the emergence of this theme beyond the traditional positivist discourse. First we analyze the subject as part of the "new social order" asserted with globalization. In this sense, we consider the importance of observing it in its complexity, understood simultaneously as order and disorder, rationality and subjectivity of the capitalist system that created it. We then highlight social responsibility in organizations as revealing their concern with social sustainability, but also as a means of remitting the guilt for the destruction that the productive system imposes on the vast majority of humanity.

  6. A reconstrução da realidade nas Ciências Sociais

    Directory of Open Access Journals (Sweden)

    Florestan Fernandes

    2004-07-01

    Full Text Available Editors' note: The article "A reconstrução da realidade nas Ciências Sociais" was originally published in two parts. The first appeared in issue 82 of Revista Anhembi, as indicated in the explanatory note of the text. The second part was published in issue 83 of the same journal, which came out in October 1957. The two parts of the article were published as a chapter of Florestan Fernandes' book "Fundamentos Empíricos da Explicação Sociológica," issued by Cia. Ed. Nacional in 1959. Editora Ática republished the complete article in the book "Florestan Fernandes – Coleção Grandes Cientistas Sociais," organized by Octavio Ianni in 1986. Revista Mediações, with authorization, publishes the first part of the article as it appeared in issue 82 of Revista Anhembi. The notes appear numbered in the text in parentheses.

  7. Air Traffic Controller Performance and Acceptability of Multiple UAS in a Simulated NAS Environment

    Science.gov (United States)

    Vu, Kim-Phuong L.; Strybel, Thomas; Chiappe, Dan; Morales, Greg; Battiste, Vernol; Shively, Robert Jay

    2014-01-01

    Previously, we showed that air traffic controllers (ATCos) rated UAS pilot verbal response latencies as acceptable when a 1.5 s delay was added to the UAS pilot responses, but a 5 s delay was rated as mostly unacceptable. In the present study we determined whether a 1.5 s added delay in the UAS pilots' verbal communications would affect ATCos' interactions with UAS and conventional aircraft when the number and speed of the UAS were manipulated. Eight radar-certified ATCos participated in this simulation. The ATCos managed a medium-altitude sector containing arrival aircraft, en route aircraft, and one to four UAS. The UAS were conducting a surveillance mission and flew at either a "slow" or "fast" speed. We measured both UAS and conventional pilots' verbal communication latencies and obtained ATCos' acceptability ratings for these latencies. Although the UAS pilot response latencies were longer than those of conventional pilots, the ATCos rated UAS pilot verbal communication latencies to be as acceptable as those of conventional pilots. Because the overall traffic load within the sector was held constant, ATCos performed only slightly worse when multiple UAS were in their sector than when only one UAS was in the sector. Implications of these findings for UAS integration in the NAS are discussed.

  8. Imagens Intoleráveis: horror e morte nas embalagens de produtos de tabaco

    Directory of Open Access Journals (Sweden)

    Ana Amélia Erthal

    2014-07-01

    Full Text Available This article explores the set of health warnings, in text messages and images, used on tobacco product packaging as mandated by the World Health Organization to control the worldwide epidemic of cigarette consumption. Brazil was one of the four countries to cover up to 50% of the packaging with grim images warning of the consequences of the smoking habit, and its other control policies are taken as examples of good practice in other countries. The aim is to question whether the images communicate their purpose despite their dramatic character. To this end we draw on Jacques Rancière's concept of the "intolerable image," and on credibility and verisimilitude in Georges Didi-Huberman and Philippe Dubois, respectively.

  9. Sociedades multiculturais nas instituições de educação formal

    Directory of Open Access Journals (Sweden)

    Franciele dos Santos

    2012-06-01

    Full Text Available This article aims to present some considerations on multicultural societies in formal educational institutions and on how different social groups may be experiencing an education that falls short in quality on account of their different cultures; we also point out contributions from several authors on the subject. Our intention is to draw attention to the consequences of an education that does not consider the subjectivity of each subject within the school space, generating disrespect and intolerance, which can interfere with each student's formation and with the quality of teaching. We do not intend to address a specific segment, only to call attention to the need to respect different cultures, in light of authors who discuss the same theme. Keywords: education; multiculturalism; multicultural societies.

  10. Trabalho docente nas universidades federais: tensões e contradições

    Directory of Open Access Journals (Sweden)

    Denise Lemos

    Full Text Available This article analyzes teaching work in the federal universities, especially at the Universidade Federal da Bahia, between 2005 and 2008, from the standpoint of the social precarization of work and the consequent alienation of the worker, based on the results of doctoral research carried out at UFBA. It describes the fundamental dimensions of this process: the multiplicity of tasks, the raising of internal and external funds for research, the contradictions between training and the demands of the university system, and work overload and its consequences, such as the absence of leisure, loss of control over the academic project, and illness. It concludes that the main contradiction experienced by faculty is that the autonomy they perceive is not the autonomy they exercise, since they are subject to various internal and external controls of the meritocratic system, whose demands exceed the professor's physical and psychic capacity to respond adequately. Nevertheless, understanding the process of alienation is the basis for the transformation and emancipation of those whose fundamental role is to develop the capacities of others.

  11. HISTÓRIAS DE RETIRANTES: RUÍNAS LITERÁRIAS NO CINEMA

    Directory of Open Access Journals (Sweden)

    Elisabete Alfeld Rodrigues

    2009-06-01

    Full Text Available In O caminho das nuvens, Vicente Amorim tells the story of a family's migration from the Northeastern interior to the Southeast. It is a story that has been told many times, for example in Vidas Secas: first in literature, by Graciliano Ramos, and later in cinema, by Nélson Pereira dos Santos. In the migration told by Amorim, the journey does not take place on foot across the arid backlands; it happens on bicycles along the highways, roads, and small towns that the family (father, mother, and five children) crosses to reach the city of Rio de Janeiro. The family sets out from Paraíba in search of a job paying a thousand reais a month that can only be found in the Southeast of Brazil; this triggering motive comes from a factual story. The film is a rereading that draws together literature, cinema, and factual history, establishing a dialogic relationship built from fragments of the various stories.

  12. Perfil de comercialização das Anonáceas nas Ceasas brasileiras

    Directory of Open Access Journals (Sweden)

    Hélio Satoshi Watanabe

    2014-01-01

    Full Text Available The quantity of Annonaceae fruits traded in the main wholesale markets is growing and is concentrated at CEAGESP (61%). Data collected by CEAGESP's SIEM show, between 2011 and 2012, strong growth in the supply of atemoya and soursop, 35% and 32% respectively, and a 20% drop in the volume of sugar apple over the same period. Atemoya (54%), sugar apple (41%), and soursop (5%) are the most important Annonaceae traded at CEAGESP. Their origin is concentrated in the state of Bahia for sugar apple and soursop, and in Minas Gerais and São Paulo for atemoya. A study of the causes of price differentiation between atemoya lots with maximum and minimum values, of the same size classification, on the same day, showed that visual uniformity of size is the main driver of price differentiation. Improving size grading is the best value differentiation strategy for growers to adopt.

  13. 1-eV GaInNAs solar cells for ultrahigh-frequency multijunction devices

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, D.J.; Geisz, J.F.; Kurtz, S.R.; Olson, J.M. [National Renewable Energy Lab., Golden, CO (United States)

    1998-09-01

    The authors demonstrate working prototypes of a GaInNAs-based solar cell lattice-matched to GaAs with photoresponse down to 1 eV. This device is intended for use as the third junction of future-generation ultrahigh-efficiency three- and four-junction devices. Under the AM1.5 direct spectrum with all the light higher in energy than the GaAs band gap filtered out, the prototypes have open-circuit voltages ranging from 0.35 to 0.44 V, short-circuit currents of 1.8 mA/cm{sup 2}, and fill factors from 61--66%. The short-circuit currents are of principal concern: the internal quantum efficiencies rise only to about 0.2. The authors discuss the short diffusion lengths which are the reason for this low photocurrent. As a partial workaround for the poor diffusion lengths, they demonstrate a depletion-width-enhanced variation of one of the prototype devices that trades off decreased voltage for increased photocurrent, with a short-circuit current of 6.5 mA/cm{sup 2} and an open-circuit voltage of 0.29 V.
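
    As a back-of-the-envelope check on the reported figures (a sketch, not a calculation from the paper; the fill factor of the depletion-width-enhanced variant is not reported, so a range is assumed for it), the maximum output power density under the filtered spectrum follows from P = FF x Voc x Jsc:

        def power_density_mw_cm2(voc_v, jsc_ma_cm2, ff):
            # Maximum power density in mW/cm^2: P = FF * Voc * Jsc.
            return ff * voc_v * jsc_ma_cm2

        # Baseline prototypes, as reported: Voc 0.35-0.44 V, Jsc 1.8 mA/cm^2, FF 0.61-0.66.
        lo = power_density_mw_cm2(0.35, 1.8, 0.61)
        hi = power_density_mw_cm2(0.44, 1.8, 0.66)
        print(f"baseline prototypes: {lo:.2f} to {hi:.2f} mW/cm^2")

        # Depletion-width-enhanced variant: Voc 0.29 V, Jsc 6.5 mA/cm^2;
        # the fill factor below is an assumed range, not a reported value.
        for ff in (0.55, 0.65):
            print(f"enhanced variant (FF = {ff}): {power_density_mw_cm2(0.29, 6.5, ff):.2f} mW/cm^2")

    Even with the assumed fill factors, the variant's higher photocurrent more than offsets its lower voltage, which is consistent with the trade-off the authors describe.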

  14. Advanced Imaging Tracker

    Science.gov (United States)

    1982-06-01


  15. Advanced Chemical Propulsion

    Science.gov (United States)

    Bai, S. Don

    2000-01-01

    Design, propellant selection, and launch assistance for an advanced chemical propulsion system are discussed. Topics include rocket design, advanced fuels and high-energy-density materials, launch assist, and criteria for fuel selection.

  16. Advanced CANDU reactors

    International Nuclear Information System (INIS)

    Dunn, J.T.; Finlay, R.B.; Olmstead, R.A.

    1988-12-01

    AECL has undertaken the design and development of a series of advanced CANDU reactors in the 700-1150 MW(e) size range. These advanced reactor designs are the product of ongoing generic research and development programs on CANDU technology and design studies for advanced CANDU reactors. The prime objective is to create a series of advanced CANDU reactors which are cost competitive with coal-fired plants in the market for large electricity generating stations. Specific plant designs in the advanced CANDU series will be ready for project commitment in the early 1990s and will be capable of further development to remain competitive well into the next century

  17. Advances in chemical Physics

    CERN Document Server

    Rice, Stuart A

    2011-01-01

    The Advances in Chemical Physics series, the cutting edge of research in chemical physics. The Advances in Chemical Physics series provides the chemical physics and physical chemistry fields with a forum for critical, authoritative evaluations of advances in every area of the discipline. Filled with cutting-edge research reported in a cohesive manner not found elsewhere in the literature, each volume of the Advances in Chemical Physics series offers contributions from internationally renowned chemists and serves as the perfect supplement to any advanced graduate class devoted to the study of chemical physics.

  18. Advances in chemical physics

    CERN Document Server

    Rice, Stuart A

    2012-01-01

    The Advances in Chemical Physics series, the cutting edge of research in chemical physics. The Advances in Chemical Physics series provides the chemical physics field with a forum for critical, authoritative evaluations of advances in every area of the discipline. Filled with cutting-edge research reported in a cohesive manner not found elsewhere in the literature, each volume of the Advances in Chemical Physics series serves as the perfect supplement to any advanced graduate class devoted to the study of chemical physics. This volume explores: Quantum Dynamical Resonances in Ch

  19. Advances in chemical physics

    CERN Document Server

    Rice, Stuart A

    2011-01-01

    The Advances in Chemical Physics series, the cutting edge of research in chemical physics. The Advances in Chemical Physics series provides the chemical physics and physical chemistry fields with a forum for critical, authoritative evaluations of advances in every area of the discipline. Filled with cutting-edge research reported in a cohesive manner not found elsewhere in the literature, each volume of the Advances in Chemical Physics series offers contributions from internationally renowned chemists and serves as the perfect supplement to any advanced graduate class devoted to the study of chemical physics.

  20. ACR-700 advanced technologies

    International Nuclear Information System (INIS)

    Tapping, R.L.; Turner, C.W.; Yu, S.K.W.; Olmstead, R.; Speranzini, R.A.

    2004-01-01

    A successful advanced reactor plant will have optimized economics, including reduced operating and maintenance costs, improved performance, and enhanced safety. Incorporating improvements based on advanced technologies ensures the cost, safety, and operational competitiveness of the ACR-700. These advanced technologies include modern configuration management, construction technologies, operational technology for the control centre, and information systems for plant monitoring and analysis. This paper summarizes the advanced technologies used to achieve construction and operational improvements that enhance plant economic competitiveness, reviews advances in the operational technology used for reactor control, and presents the development of the Smart CANDU suite of tools and its application to existing operating reactors and to the ACR-700. (author)