WorldWideScience

Sample records for computational center rsicc

  1. Knowledge management: Role of the Radiation Safety Information Computational Center (RSICC)

    Science.gov (United States)

    Valentine, Timothy

    2017-09-01

    The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.

  2. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    International Nuclear Information System (INIS)

    Kirk, Bernadette Lugue

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries. An important activity of RSICC is its participation in international efforts on computational and experimental benchmarks. An example is the Shielding Integral Benchmarks Archival Database (SINBAD), which includes shielding benchmarks for fission, fusion and accelerators. RSICC is funded by the United States Department of Energy, Department of Homeland Security and Nuclear Regulatory Commission.

  3. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL]

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries.

  4. The Role of the Radiation Safety Information Computational Center (RSICC) in Knowledge Management

    International Nuclear Information System (INIS)

    Valentine, T.

    2016-01-01

    Full text: The Radiation Safety Information Computational Center (RSICC) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 packages that have been provided by contributors from various agencies. RSICC’s customers obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to help ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programmes both domestically and internationally, as the majority of RSICC’s customers are students attending U.S. universities. RSICC also supports and promotes workshops and seminars in nuclear science and technology to further the use and/or development of computational tools and data. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC’s activities, services, and systems that support knowledge management and education and training in the nuclear field. (author)

  5. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the RIKEN/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  6. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2004-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security.

  7. Listing of Available ACE Data Tables

    Energy Technology Data Exchange (ETDEWEB)

    Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-01-31

    This document is divided into multiple sections. Section 2 lists some of the more frequently used ENDF/B reaction types that can be used with the FM input card. The remaining sections (described below) contain tables showing the available ACE data tables for various types of data. These ACE data libraries are distributed by the Radiation Safety Information Computational Center (RSICC) with MCNP6.
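
    Each table in such listings is identified by a ZAID-style name of the form ZZZAAA.nnX (for example, 92235.80c for a continuous-energy neutron table). As a rough illustration (the helper name and the reading of the suffix follow the common ACE naming convention and are assumptions, not taken from this report), a ZAID can be decomposed as in this minimal Python sketch:

        def parse_zaid(zaid):
            """Split an ACE table identifier such as '92235.80c' (assumed ZZZAAA.nnX convention)."""
            nuclide, suffix = zaid.split(".")
            z, a = divmod(int(nuclide), 1000)                  # atomic number, mass number
            library_id, data_class = suffix[:-1], suffix[-1]   # e.g. '80' and 'c' (continuous-energy neutron)
            return {"Z": z, "A": a, "library": library_id, "class": data_class}

        print(parse_zaid("92235.80c"))  # {'Z': 92, 'A': 235, 'library': '80', 'class': 'c'}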

  8. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the RIKEN/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  9. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2005-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

  10. MCNP application for the 21st century

    International Nuclear Information System (INIS)

    McKinney, G.W.

    2000-01-01

    The Los Alamos National Laboratory (LANL) Monte Carlo N-Particle radiation transport code, MCNP, has become an international standard for a wide spectrum of neutron, photon, and electron radiation transport applications. The latest version of the code, MCNP 4C, was released to the Radiation Safety Information Computational Center (RSICC) in February 2000. This paper describes the code development philosophy, new features and capabilities, applicability to various problems, and future directions

  11. Basic data, computer codes and integral experiments: The tools for modelling in nuclear technology

    International Nuclear Information System (INIS)

    Sartori, E.

    2001-01-01

    When studying applications in nuclear technology we need to understand and be able to predict the behavior of systems manufactured by human enterprise. First, the underlying basic physical and chemical phenomena need to be understood. We then have to predict the results of the interplay of the large number of different basic events: i.e. the macroscopic effects. In order to build confidence in our modelling capability, we then need to compare these results against measurements carried out on such systems. The different levels of modelling require the solution of different types of equations using different types of parameters. The tools required for carrying out a complete validated analysis are: the basic nuclear or chemical data; the computer codes; and the integral experiments. This article describes the role each component plays in a computational scheme designed for modelling purposes. It also describes which tools have been developed and are internationally available. The role that the OECD/NEA Data Bank, the Radiation Safety Information Computational Center (RSICC), and the IAEA Nuclear Data Section play in making these elements available to the community of scientists and engineers is described. (author)

  12. Center for Advanced Computational Technology

    Science.gov (United States)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  13. Building the Teraflops/Petabytes Production Computing Center

    International Nuclear Information System (INIS)

    Kramer, William T.C.; Lucas, Don; Simon, Horst D.

    1999-01-01

    In just one decade, the 1990s, supercomputer centers have undergone two fundamental transitions which require rethinking their operation and their role in high performance computing. The first transition in the early to mid-1990s resulted from a technology change in high performance computing architecture. Highly parallel distributed memory machines built from commodity parts increased the operational complexity of the supercomputer center, and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers are introducing loosely coupled clusters of SMPs as their premier high performance computing platforms, while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular, computational grid applications. In this paper we describe what steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.

  14. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

    In April 1997, the National Laboratory for High Energy Physics (KEK), the Institute for Nuclear Study of the University of Tokyo (INS), and the Meson Science Laboratory of the Faculty of Science, University of Tokyo, were reorganized into the High Energy Accelerator Research Organization, with the aim of further developing the wide field of accelerator science based on high-energy accelerators. Within the new organization, the Applied Research Laboratory is composed of four centers that support research activities common to the whole organization and carry out the related research and development (R and D), integrating the previous four centers and the related sections at Tanashi. The expected support covers not only general assistance but also the preparation and R and D of systems required for promoting the research and its future plans. Computer technology is essential to the development of the research and can be shared across the various research programs of the organization. In response to these expectations, the new Computing Research Center is to carry out its duties in collaboration and cooperation with researchers, over a range extending from R and D on data analysis for various experiments to computational physics that relies on powerful computing capacity such as supercomputers. The first chapter reports on the work and present state of the Data Processing Center of KEK, the second chapter covers the computer room of INS, and future issues for the Computing Research Center are also discussed. (G.K.)

  15. Center for computer security: Computer Security Group conference. Summary

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-06-01

    Topics covered include: computer security management; detection and prevention of computer misuse; certification and accreditation; protection of computer security, perspective from a program office; risk analysis; secure accreditation systems; data base security; implementing R and D; key notarization system; DOD computer security center; the Sandia experience; inspector general's report; and backup and contingency planning. (GHT)

  16. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  17. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.]

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  18. Transportation Research & Analysis Computing Center

    Data.gov (United States)

    Federal Laboratory Consortium — The technical objectives of the TRACC project included the establishment of a high performance computing center for use by USDOT research teams, including those from...

  19. SCALE criticality safety verification and validation package

    International Nuclear Information System (INIS)

    Bowman, S.M.; Emmett, M.B.; Jordan, W.C.

    1998-01-01

    Verification and validation (V&V) are essential elements of software quality assurance (QA) for computer codes that are used for performing scientific calculations. V&V provides a means to ensure the reliability and accuracy of such software. As part of the SCALE QA and V&V plans, a general V&V package for the SCALE criticality safety codes has been assembled, tested and documented. The SCALE criticality safety V&V package is being made available to SCALE users through the Radiation Safety Information Computational Center (RSICC) to assist them in performing adequate V&V for their SCALE applications
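
    For context, a criticality safety V&V package of this kind typically automates the comparison of computed k-effective values against benchmark values within combined uncertainties. The following minimal Python sketch illustrates such a check; the case names, tolerance, and numbers are hypothetical and are not taken from the SCALE package:

        # Hypothetical validation check: flag cases whose computed k-eff disagrees with the benchmark.
        cases = [
            # (case name, benchmark k-eff, benchmark sigma, computed k-eff, computed sigma)
            ("heu-sol-therm-001", 1.0000, 0.0060, 0.9982, 0.0004),
            ("pu-met-fast-001",   1.0000, 0.0020, 1.0031, 0.0005),
        ]
        for name, k_bench, s_bench, k_calc, s_calc in cases:
            combined = (s_bench**2 + s_calc**2) ** 0.5       # combined 1-sigma uncertainty
            ok = abs(k_calc - k_bench) <= 3.0 * combined     # assumed 3-sigma acceptance band
            print(f"{name}: diff = {k_calc - k_bench:+.4f}, 3*sigma = {3.0*combined:.4f}, {'PASS' if ok else 'FAIL'}")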

  20. Human-centered Computing: Toward a Human Revolution

    OpenAIRE

    Jaimes, Alejandro; Gatica-Perez, Daniel; Sebe, Nicu; Huang, Thomas S.

    2007-01-01

    Human-centered computing studies the design, development, and deployment of mixed-initiative human-computer systems. HCC is emerging from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts.

  1. Engineering computations at the national magnetic fusion energy computer center

    International Nuclear Information System (INIS)

    Murty, S.

    1983-01-01

    The National Magnetic Fusion Energy Computer Center (NMFECC) was established by the U.S. Department of Energy's Division of Magnetic Fusion Energy (MFE). The NMFECC headquarters is located at Lawrence Livermore National Laboratory. Its purpose is to apply large-scale computational technology and computing techniques to the problems of controlled thermonuclear research. In addition to providing cost effective computing services, the NMFECC also maintains a large collection of computer codes in mathematics, physics, and engineering that is shared by the entire MFE research community. This review provides a broad perspective of the NMFECC, and a list of available codes at the NMFECC for engineering computations is given

  2. The role of dedicated data computing centers in the age of cloud computing

    Science.gov (United States)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  3. Center for Computing Research Summer Research Proceedings 2015.

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, Andrew Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Parks, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2015-12-18

    The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).

  4. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies (ICT), and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. Recently a dedicated optical network with the initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  5. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  6. Radiation shielding activities at the OECD/Nuclear Energy Agency

    International Nuclear Information System (INIS)

    Sartori, Enrico; Vaz, Pedro

    2000-01-01

    The OECD Nuclear Energy Agency (NEA) has devoted considerable effort over the years to radiation shielding issues. The issues are addressed through international working groups. These activities are carried out in close co-ordination and co-operation with the Radiation Safety Information Computational Center (RSICC). The areas of work include: basic nuclear data activities in support of radiation shielding, computer codes, shipping cask shielding applications, reactor pressure vessel dosimetry, shielding experiments database. The method of work includes organising international code comparison exercises and benchmark studies. Training courses on radiation shielding computer codes are organised regularly including hands-on experience in modelling skills. The scope of the activity covers mainly reactor shields and spent fuel transportation packages, but also fusion neutronics and in particular shielding of accelerators and irradiation facilities. (author)

  7. Center for computation and visualization of geometric structures. [Annual], Progress report

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-12

    The mission of the Center is to establish a unified environment promoting research, education, and software and tool development. The work is centered on computing, interpreted in a broad sense to include the relevant theory, development of algorithms, and actual implementation. The research aspects of the Center are focused on geometry; correspondingly the computational aspects are focused on three (and higher) dimensional visualization. The educational aspects are likewise centered on computing and focused on geometry. A broader term than education is 'communication' which encompasses the challenge of explaining to the world current research in mathematics, and specifically geometry.

  8. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Fragments recovered from this brochure-style document: research topics include the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network, as well as reasoning and assistive technologies; the remainder consists of staff listings and contact information for the AHPCRC (www.ahpcrc.org).

  9. A Microsoft Windows version of the MCNP visual editor

    International Nuclear Information System (INIS)

    Schwarz, R.A.; Carter, L.L.; Pfohl, J.

    1999-01-01

    Work has started on a Microsoft Windows version of the MCNP visual editor. The MCNP visual editor provides a graphical user interface for displaying and creating MCNP geometries. The visual editor is currently available from the Radiation Safety Information Computational Center (RSICC) and the Nuclear Energy Agency (NEA) as software package PSR-358. It currently runs on the major UNIX platforms (IBM, SGI, HP, SUN) and Linux. Work has started on converting the visual editor to work in a Microsoft Windows environment. This initial work focuses on converting the display capabilities of the visual editor; the geometry creation capability of the visual editor may be included in future upgrades

  10. UC Merced Center for Computational Biology Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Colvin, Michael; Watanabe, Masakatsu

    2010-11-30

    Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of biological sciences undergraduate and graduate program that emphasized biological concepts and considered biology as an information science would have a dramatic impact in enabling the transformation of biology. UC Merced, as the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy - to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. This report to DOE describes the research and academic programs

  11. GUI2QAD-3D a graphical interface program for QAD-CGPIC program

    International Nuclear Information System (INIS)

    Subbaiah, K. V.; Sarangapani, R.

    2002-01-01

    A point kernel code, QAD-CGPIC, is developed by combining QAD-CGGP and PICTURE in a consistent fashion to utilize the capabilities of the two independent codes. The code can be used for shielding calculations of gamma ray and fast neutron penetration through complex geometrical arrangements of shielding structures. Further modifications of the code are carried out to handle off-centered multiple identical sources. The input format structure is difficult to memorise while using the code. To circumvent this problem, a graphical user-friendly interface, GUI2QAD-3D, is developed with online context-sensitive help under the WINDOWS environment in Visual Basic. Several benchmark tests of inputs are carried out to validate the modified code. The package comes on one Compact Disc and includes inputs for several practical problems relating to nuclear fuel reprocessing labs. The salient features of QAD-CGPIC and GUI2QAD-3D are listed below: i) Handles off-centered multiple identical sources ii) Cylindrical sources can be oriented parallel to any of the X, Y, Z axes iii) Provides plots of material cross sections and buildup factors for photons iv) Estimates dose rate for point source-slab shield situations v) Interactive input preparation for the geometry vi) 3D view of the geometry with arbitrary rotation around the X, Y or Z axes vii) Optional facility to indicate detector location viii) Provision to view the Picture input file ix) Provision to calculate fission product gamma emission rates as a function of time. The code has been contributed to the computer code collection at the Radiation Safety Information Computational Center (RSICC). The code is tested and validated at RSICC and listed as CCC-697-GUI2QAD-3D in their code depository
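
    For context, QAD-type point-kernel codes estimate the gamma dose at a detector point by ray-tracing from each source point through the shield regions, attenuating exponentially along the ray and applying a buildup factor. A standard form of the kernel (the notation here is assumed, not taken from the abstract) is, in LaTeX:

        \dot{D}(P) = C \sum_{i} \frac{S_i}{4\pi r_i^{2}} \, B\!\left(\sum_{j} \mu_j t_{ij}\right) \exp\!\left(-\sum_{j} \mu_j t_{ij}\right)

    where S_i is the strength of source point i, r_i its distance to the detector point P, mu_j and t_ij the attenuation coefficient of and path length through shield region j, B the buildup factor, and C a flux-to-dose conversion factor.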

  12. GUI2QAD-3D a graphical interface program for QAD-CGPIC program

    International Nuclear Information System (INIS)

    Subbaiah, K.V.; Sarangapani, R.

    2002-01-01

    Full text: A point kernel code, QAD-CGPIC, is developed by combining QAD-CGGP and PICTURE in a consistent fashion to utilize the capabilities of the two independent codes. The code can be used for shielding calculations of gamma ray and fast neutron penetration through complex geometrical arrangements of shielding structures. Further modifications of the code are carried out to handle off-centered multiple identical sources. The input format structure is difficult to memorise while using the code. To circumvent this problem, a graphical user-friendly interface, GUI2QAD-3D, is developed with online context-sensitive help under the WINDOWS environment in Visual Basic. Several benchmark tests of inputs are carried out to validate the modified code. The package comes on one Compact Disc and includes inputs for several practical problems relating to nuclear fuel reprocessing labs. The salient features of QAD-CGPIC and GUI2QAD-3D are listed below: i) Handles off-centered multiple identical sources ii) Cylindrical sources can be oriented parallel to any of the X, Y, Z axes iii) Provides plots of material cross sections and buildup factors for photons iv) Estimates dose rate for point source-slab shield situations v) Interactive input preparation for the geometry vi) 3D view of the geometry with arbitrary rotation around the X, Y or Z axes vii) Optional facility to indicate detector location viii) Provision to view the Picture input file ix) Provision to calculate fission product gamma emission rates as a function of time. The code has been contributed to the computer code collection at the Radiation Safety Information Computational Center (RSICC). The code is tested and validated at RSICC and listed as CCC-697-GUI2QAD-3D in their code depository

  13. Center for Computational Wind Turbine Aerodynamics and Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær

    2014-01-01

    In order to design and operate a wind farm optimally it is necessary to know in detail how the wind behaves and interacts with the turbines in a farm. This not only requires knowledge about meteorology, turbulence and aerodynamics, but it also requires access to powerful computers and efficient software. Center for Computational Wind Turbine Aerodynamics and Atmospheric Turbulence was established in 2010 in order to create a world-leading cross-disciplinary flow center that covers all relevant disciplines within wind farm meteorology and aerodynamics.

  14. Supporting Human Activities - Exploring Activity-Centered Computing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Bardram, Jakob

    2002-01-01

    In this paper we explore an activity-centered computing paradigm that is aimed at supporting work processes that are radically different from the ones known from office work. Our main inspiration is healthcare work that is characterized by an extreme degree of mobility, many interruptions, ad-hoc...

  15. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  16. Cloud Computing in Science and Engineering and the “SciShop.ru” Computer Simulation Center

    Directory of Open Access Journals (Sweden)

    E. V. Vorozhtsov

    2011-12-01

    Various aspects of cloud computing applications for scientific research, applied design, and remote education are described in this paper. An analysis of the different aspects is performed based on the experience from the “SciShop.ru” Computer Simulation Center. This analysis shows that cloud computing technology has wide prospects in scientific research applications, applied developments and also remote education of specialists, postgraduates, and students.

  17. The Computational Physics Program of the national MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper.

  18. Computational geometry lectures at the morningside center of mathematics

    CERN Document Server

    Wang, Ren-Hong

    2003-01-01

    Computational geometry is a borderline subject related to pure and applied mathematics, computer science, and engineering. The book contains articles on various topics in computational geometry, which are based on invited lectures and some contributed papers presented by researchers working during the program on Computational Geometry at the Morningside Center of Mathematics of the Chinese Academy of Science. The opening article by R.-H. Wang gives a nice survey of various aspects of computational geometry, many of which are discussed in more detail in other papers in the volume. The topics include problems of optimal triangulation, splines, data interpolation, problems of curve and surface design, problems of shape control, quantum teleportation, and others.

  19. A Descriptive Study towards Green Computing Practice Application for Data Centers in IT Based Industries

    Directory of Open Access Journals (Sweden)

    Anthony Jnr. Bokolo

    2018-01-01

    The progressive upsurge in demand for processing and computing power has led to a corresponding upsurge in data center carbon emissions, cost incurred, unethical waste management, depletion of natural resources and high energy utilization. This raises the issue of attaining sustainability in the data centers of Information Technology (IT) based industries. Green computing practice can be applied to facilitate sustainability attainment, as IT based industries utilize data centers to provide services to staff, practitioners and end users. It is a known fact, however, that enterprise servers consume huge quantities of energy and incur other expenditures in cooling operations, and it is difficult to address the needs of accuracy and efficiency in data centers while encouraging greener application practices alongside cost reduction. This research study therefore focuses on the application of Green computing practice in data centers, which house servers, and presents Green computing life cycle strategies and best practices for better management of data centers in IT based industries. Data were collected through a questionnaire from 133 respondents in industries that currently operate their in-house data centers. The analysed data were used to verify the Green computing life cycle strategies presented in this study. Findings from the data show that each of the life cycle strategies is significant in assisting IT based industries to apply Green computing practices in their data centers. This study would be of interest to knowledge and data management practitioners as well as environmental managers and academicians in deploying Green data centers in their organizations.

  20. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga

    2007-01-01

    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment, and is going to switch over to the new production system which has been recently installed. The system will be of great help to the exciting physics analyses for coming years. The new computer system includes brand-new blade servers, RAID disks, a tape library system and Ethernet switches. The blade server is DELL PowerEdge 1955 which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3GHz, and a total of 650 servers will be used as compute nodes. Each of the RAID disks is configured to be RAID-6 with 16 Serial ATA HDDs. The equipment as well as the cooling system is placed in a new large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with redundant configuration in a cost-effective way. The next major upgrade will take place in thre...

  1. Conception of a computer for the nuclear medical department of the Augsburg hospital center

    International Nuclear Information System (INIS)

    Graf, G.; Heidenreich, P.

    1984-01-01

    A computer system based on the Siemens R30 process computer has been employed at the Institute of Nuclear Medicine of the Augsburg Hospital Center since early 1981. This system, including the development and testing of organ-specific evaluation programs, was used as a basis for the conception of the new computer system for the department of nuclear medicine of the Augsburg Hospital Center. The computer system was extended and installed according to this conception when the new 1400-bed hospital was opened in the 3rd phase of construction in autumn 1982. (orig.) [de]

  2. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and personpower resources.

  3. Status of electron transport in MCNP trademark

    International Nuclear Information System (INIS)

    Hughes, H.G.

    1997-01-01

    The latest version of MCNP, the Los Alamos Monte Carlo transport code, has now been officially released. MCNP4B has been sent to the Radiation Safety Information Computational Center (RSICC), in Oak Ridge, Tennessee, which is responsible for the further distribution of the code within the US. International distribution of MCNP is done by the Nuclear Energy Agency (OECD/NEA), in Paris, France. Readers with access to the World-Wide-Web should consult the MCNP distribution site http://www-xdiv.lanl.gov/XTM/mcnp/about.html for specific information about contacting RSICC and OECD/NEA. A variety of new features are available in MCNP4B. Among these are differential operator perturbations, cross-section plotting capabilities, enhanced diagnostics for transport in repeated structures and lattices, improved efficiency in distributed-memory multiprocessing, corrected particle lifetime and lifespan estimators, and expanded software quality assurance procedures and testing, including testing of the multigroup Boltzmann-Fokker-Planck capability. New and improved cross section sets in the form of ENDF/B-VI evaluations have also been recently released and can be used in MCNP4B. Perhaps most significant for the interests of this special session, the electron transport algorithm has been improved, especially in the collisional energy-loss straggling and the angular-deflection treatments. In this paper, the author concentrates on a fairly complete documentation of the current status of the electron transport methods in MCNP

  4. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs WLCG Tier-2 center for the ALICE and the ATLAS experiments; the same group of services is used by astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). OSG stack is installed for the NOvA experiment. Other groups of users use directly local batch system. Storage capacity is distributed to several locations. DPM servers used by the ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for the ATLAS and the PAO is extended by resources of the CESNET - the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources using the standard ATLAS tools in the same way as the local storage without noticing this geographical distribution. Computing clusters LUNA and EXMAG dedicated to users mostly from the Solid State Physics departments offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum with distributed batch system based on torque with a custom scheduler. Clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from IoP have exclusive access only to a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  5. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  6. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  7. Applied Computational Fluid Dynamics at NASA Ames Research Center

    Science.gov (United States)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  8. A Computer Learning Center for Environmental Sciences

    Science.gov (United States)

    Mustard, John F.

    2000-01-01

    In the fall of 1998, MacMillan Hall opened at Brown University to students. In MacMillan Hall was the new Computer Learning Center, since named the EarthLab which was outfitted with high-end workstations and peripherals primarily focused on the use of remotely sensed and other spatial data in the environmental sciences. The NASA grant we received as part of the "Centers of Excellence in Applications of Remote Sensing to Regional and Global Integrated Environmental Assessments" was the primary source of funds to outfit this learning and research center. Since opening, we have expanded the range of learning and research opportunities and integrated a cross-campus network of disciplines who have come together to learn and use spatial data of all kinds. The EarthLab also forms a core of undergraduate, graduate, and faculty research on environmental problems that draw upon the unique perspective of remotely sensed data. Over the last two years, the Earthlab has been a center for research on the environmental impact of water resource use in arid regions, impact of the green revolution on forest cover in India, the design of forest preserves in Vietnam, and detailed assessments of the utility of thermal and hyperspectral data for water quality analysis. It has also been used extensively for local environmental activities, in particular studies on the impact of lead on the health of urban children in Rhode Island. Finally, the EarthLab has also served as a key educational and analysis center for activities related to the Brown University Affiliated Research Center that is devoted to transferring university research to the private sector.

  9. MCNP capabilities at the dawn of the 21st century: Neutron-gamma applications

    International Nuclear Information System (INIS)

    Selcow, E.C.; McKinney, G.W.

    2000-01-01

    The Los Alamos National Laboratory Monte Carlo N-Particle radiation transport code, MCNP, has become an international standard for a wide spectrum of neutron-gamma radiation transport applications. These include nuclear criticality safety, radiation shielding, nuclear safeguards, nuclear well-logging, fission and fusion reactor design, accelerator target design, detector design and analysis, health physics, medical radiation therapy and imaging, radiography, decontamination and decommissioning, and waste storage and disposal. The latest version of the code, MCNP4C, was released to the Radiation Safety Information Computational Center (RSICC) in February 2000. This paper describes the new features and capabilities of the code and discusses their specific applicability to neutron-gamma problems. We will also discuss future directions for MCNP code development, including rewriting the code in Fortran 90.

  10. Intention and Usage of Computer Based Information Systems in Primary Health Centers

    Science.gov (United States)

    Hosizah; Kuntoro; Basuki N., Hari

    2016-01-01

    The computer-based information system (CBIS) has been adopted in almost all health care settings, including the primary health centers in East Java Province, Indonesia. Some of the software packages available were SIMPUS, SIMPUSTRONIK, SIKDA Generik, and e-puskesmas. Unfortunately, most of the primary health centers did not implement them successfully. This…

  11. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  12. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and micro-processor based systems the book allows the reader to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook to assess the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  13. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    Science.gov (United States)

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  14. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  15. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  16. Use of computers and Internet among people with severe mental illnesses at peer support centers.

    Science.gov (United States)

    Brunette, Mary F; Aschbrenner, Kelly A; Ferron, Joelle C; Ustinich, Lee; Kelly, Michael; Grinley, Thomas

    2017-12-01

    Peer support centers are an ideal setting where people with severe mental illnesses can access the Internet via computers for online health education, peer support, and behavioral treatments. The purpose of this study was to assess computer use and Internet access in peer support agencies. A peer-assisted survey assessed the frequency with which consumers in all 13 New Hampshire peer support centers (n = 702) used computers to access Internet resources. During the 30-day survey period, 200 of the 702 peer support consumers (28%) responded to the survey. More than 3 quarters (78.5%) of respondents had gone online to seek information in the past year. About half (49%) of respondents were interested in learning about online forums that would provide information and peer support for mental health issues. Peer support centers may be a useful venue for Web-based approaches to education, peer support, and intervention. Future research should assess facilitators and barriers to use of Web-based resources among people with severe mental illness in peer support centers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    Science.gov (United States)

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek-like holodeck, where holographic avatars could

  18. Argonne's Laboratory computing center - 2007 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.; Pieper, G. W.

    2008-05-28

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific

  19. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  20. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    Science.gov (United States)

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high-performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number

    Directory of Open Access Journals (Sweden)

    Wang Xingyuan

    2010-01-01

    This paper presents two methods for accurately computing the centers of periodic regions. One method applies to general M-sets with integer index number, the other to general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations that determine the centers of the periodic regions. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. By applying Newton's method to the transformed polynomial equation that determines the centers, we obtain the centers' coordinates with at least 48 significant digits after the decimal point in both the real and imaginary parts. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k=3,4,5,6) for the index numbers α=−25,−24,…,−1, all of which have high numerical accuracy.
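
    The root-finding idea can be illustrated, under simplified assumptions, for the classical Mandelbrot set (index 2) rather than the general index numbers treated in the paper: the centers of k-periodic regions are roots of the polynomial p_k(c) obtained by iterating z -> z^2 + c from z = 0, and Newton's method refines a nearby guess. The following minimal Python sketch uses ordinary complex floats, not the high-precision arithmetic the paper requires; the function names are illustrative only.

    # Minimal sketch: Newton's method for centers of k-periodic regions of the
    # classical Mandelbrot set (index 2). The paper treats general M-sets with
    # (negative) integer index and much higher precision; this example only
    # illustrates the underlying root-finding idea.

    def p_and_dp(c, k):
        """Evaluate p_k(c) = z_k and its derivative dp_k/dc, where
        z_0 = 0 and z_{n+1} = z_n**2 + c."""
        z, dz = 0.0 + 0.0j, 0.0 + 0.0j
        for _ in range(k):
            z, dz = z * z + c, 2.0 * z * dz + 1.0
        return z, dz

    def newton_center(c0, k, tol=1e-14, max_iter=100):
        """Refine an initial guess c0 toward a root of p_k(c) = 0,
        i.e. a center of a k-periodic region."""
        c = c0
        for _ in range(max_iter):
            p, dp = p_and_dp(c, k)
            if abs(dp) == 0.0:
                break
            step = p / dp
            c -= step
            if abs(step) < tol:
                break
        return c

    if __name__ == "__main__":
        # Starting near -1.8 converges to the real period-3 center
        # (approximately -1.754877666...).
        print(newton_center(-1.8 + 0.0j, k=3))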

  2. The NIRA computer program package (photonuclear data center). Final report

    International Nuclear Information System (INIS)

    Vander Molen, H.J.; Gerstenberg, H.M.

    1976-02-01

    The Photonuclear Data Center's NIRA library of programs, executable from mass storage on the National Bureau of Standards' central computer facility, is described. Detailed instructions are given (with examples) for the use of the library to analyze, evaluate, synthesize, and produce for publication camera-ready tabular and graphical presentations of digital photonuclear reaction cross-section data. NIRA is the acronym for Nuclear Information Research Associate.

  3. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 data centers. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  4. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research workhorse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, Jazz is a system researchers can count on to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  5. Modeling Remote I/O versus Staging Tradeoff in Multi-Data Center Computing

    International Nuclear Information System (INIS)

    Suslu, Ibrahim H

    2014-01-01

    In multi-data center computing, data to be processed is not always local to the computation. This is a major challenge especially for data-intensive Cloud computing applications, since a large amount of data would need to be either moved to the local sites (staging) or accessed remotely over the network (remote I/O). Cloud application developers generally choose between staging and remote I/O intuitively, without making any scientific comparison specific to their application's data access patterns, since there is no generic model available that they can use. In this paper, we propose a generic model for Cloud application developers which would help them to choose the most appropriate data access mechanism for their specific application workloads. We define the parameters that potentially affect the end-to-end performance of multi-data center Cloud applications which need to access large datasets over the network. To test and validate our models, we implemented a series of synthetic benchmark applications to simulate the most common data access patterns encountered in Cloud applications. We show that our model provides promising results in different settings with different parameters, such as network bandwidth, server and client capabilities, and data access ratio.
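
    As an illustration of the kind of trade-off such a model captures, the sketch below compares an idealized staging time (move the whole dataset, then process locally) against remote I/O (read only the accessed fraction over the network). The formulas, parameter names and numbers are simplified assumptions for illustration, not the authors' actual model.

    # Illustrative-only comparison of staging vs. remote I/O for a
    # data-intensive task. All parameters and formulas are assumptions.

    def staging_time(data_gb, wan_gbps, local_read_gbps, compute_s):
        """Copy the full dataset to the local site, then read it locally."""
        transfer = data_gb * 8.0 / wan_gbps          # seconds to stage over the WAN
        local_io = data_gb * 8.0 / local_read_gbps   # seconds to re-read locally
        return transfer + local_io + compute_s

    def remote_io_time(data_gb, access_ratio, wan_gbps, compute_s,
                       latency_penalty=1.5):
        """Read only the accessed fraction of the data over the WAN;
        latency_penalty crudely accounts for small, scattered reads."""
        remote_read = data_gb * access_ratio * 8.0 / wan_gbps
        return remote_read * latency_penalty + compute_s

    if __name__ == "__main__":
        for ratio in (0.05, 0.25, 1.0):
            s = staging_time(500, wan_gbps=10, local_read_gbps=40, compute_s=600)
            r = remote_io_time(500, ratio, wan_gbps=10, compute_s=600)
            better = "remote I/O" if r < s else "staging"
            print(f"access ratio {ratio:.2f}: staging {s:.0f}s, "
                  f"remote I/O {r:.0f}s -> {better}")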

  6. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  7. The Role of Computers in Research and Development at Langley Research Center

    Science.gov (United States)

    Wieseman, Carol D. (Compiler)

    1994-01-01

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

  8. Computer-aided dispatch--traffic management center field operational test : Washington State final report

    Science.gov (United States)

    2006-05-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...

  9. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D has been conducted first to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), leading to the construction of an intelligent infrastructure for atomic energy research called Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer Aided Science (CASC) to promote Atomic Energy Research and Investigation (AERI). This article reviews the achievements in R and D of grid computing technology obtained so far. (T. Tanaka)

  10. Secure data exchange between intelligent devices and computing centers

    Science.gov (United States)

    Naqvi, Syed; Riguidel, Michel

    2005-03-01

    The advent of reliable spontaneous networking technologies (commonly known as wireless ad-hoc networks) has ostensibly raised stakes for the conception of computing intensive environments using intelligent devices as their interface with the external world. These smart devices are used as data gateways for the computing units. These devices are employed in highly volatile environments where the secure exchange of data between these devices and their computing centers is of paramount importance. Moreover, their mission critical applications require dependable measures against the attacks like denial of service (DoS), eavesdropping, masquerading, etc. In this paper, we propose a mechanism to assure reliable data exchange between an intelligent environment composed of smart devices and distributed computing units collectively called 'computational grid'. The notion of infosphere is used to define a digital space made up of a persistent and a volatile asset in an often indefinite geographical space. We study different infospheres and present general evolutions and issues in the security of such technology-rich and intelligent environments. It is beyond any doubt that these environments will likely face a proliferation of users, applications, networked devices, and their interactions on a scale never experienced before. It would be better to build in the ability to uniformly deal with these systems. As a solution, we propose a concept of virtualization of security services. We try to solve the difficult problems of implementation and maintenance of trust on the one hand, and those of security management in heterogeneous infrastructure on the other hand.

  11. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  12. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    Science.gov (United States)

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  13. GUI2QAD-3D: A graphical interface program for QAD-CGPIC program

    International Nuclear Information System (INIS)

    Subbaiah, K.V.; Sarangapani, R.

    2006-01-01

    GUI2QAD-3D is a graphical user interface developed in Visual Basic (VB) version 6.0 to prepare input for the QAD-CGPIC program. QAD-CGPIC is a FORTRAN code that combines QAD-CGGP (RSICC-CCC-493, USA) and PICTURE [Irving, D.C., Morrison, G.W., 1970. PICTURE-an aid in debugging GEOM input data, ORNL-TM-2892] for neutron and gamma-ray shielding calculations by the point kernel method in a consistent fashion, to utilize the capabilities of the two independent codes. The FORTRAN code calculates fast neutron and gamma-ray penetration through various shield configurations defined by combinatorial geometry specifications. It has provision to estimate the buildup factor either from Geometric Progression (GP) coefficients (ANS-6.4.3, 1990) or from Capo's. The capabilities of the FORTRAN code are extended by modifying it to handle off-centred multiple identical sources. Several standard test inputs were run to validate the modified code. The FORTRAN code executable is created with a Lahey compiler. The user interface facilitates interactive viewing of the geometry of the system, with online context-sensitive help. Inputs for several practical problems relating to nuclear fuel reprocessing labs are provided. The software runs on Pentium III computers under a Windows environment and is distributed on one CD. The software can be obtained from the Radiation Safety Information Computational Center (RSICC), ORNL, USA, with code package identification number CCC-697.
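
    For a single isotropic point source behind a homogeneous shield, the point kernel method reduces to an exponential attenuation term multiplied by a buildup factor and the geometric 1/(4*pi*r^2) spreading. The sketch below illustrates that calculation with a simple Berger-form buildup factor; the coefficients and numerical values are placeholders chosen for illustration, not data taken from QAD-CGPIC, ANS-6.4.3 or Capo's fits.

    import math

    # Illustrative point-kernel estimate of gamma flux from an isotropic point
    # source behind a homogeneous slab shield. Coefficients are placeholders,
    # not QAD-CGPIC or ANS-6.4.3 data.

    def buildup_berger(mu_r, a=1.0, b=0.05):
        """Berger-form buildup factor B = 1 + a * (mu*r) * exp(b * mu*r)."""
        return 1.0 + a * mu_r * math.exp(b * mu_r)

    def point_kernel_flux(source_gamma_per_s, mu_per_cm, shield_cm, distance_cm):
        """Flux (gamma / cm^2 / s) at distance_cm from the source, with a
        shield of thickness shield_cm along the line of sight."""
        mu_r = mu_per_cm * shield_cm
        attenuation = math.exp(-mu_r)
        geometry = 1.0 / (4.0 * math.pi * distance_cm ** 2)
        return source_gamma_per_s * buildup_berger(mu_r) * attenuation * geometry

    if __name__ == "__main__":
        # 1e10 gamma/s source, 0.06 /cm attenuation coefficient, 30 cm shield,
        # detector 100 cm away (all values are illustrative).
        print(f"{point_kernel_flux(1e10, 0.06, 30.0, 100.0):.3e} gamma/cm^2/s")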

  14. Computer Vision Syndrome among Call Center Employees at Telecommunication Company in Bandung

    Directory of Open Access Journals (Sweden)

    Ghea Nursyifa

    2016-06-01

    Background: The occurrence of Computer Vision Syndrome (CVS) at the workplace has increased over recent decades due to the prolonged use of computers. Knowledge of CVS is necessary in order to develop an awareness of how to prevent and alleviate it. The objective of this study was to assess the knowledge of CVS among call center employees and to explore the CVS symptom most frequently experienced by the workers. Methods: A descriptive cross-sectional study was conducted during the period of September to November 2014 at a telecommunication company in Bandung, using a questionnaire consisting of 30 questions. Out of the 30 questions/statements, 15 statements were about knowledge of CVS and the other 15 questions were about the occurrence of CVS and its symptoms. In this study 125 call center employees participated as respondents, selected by consecutive sampling. The level of knowledge was divided into 3 categories: good (76–100%), fair (56–75%) and poor (<56%). The collected data were presented in frequency tabulations. Results: 74.4% of the respondents had poor knowledge of CVS. The symptom most commonly experienced by the respondents was asthenopia. Conclusions: CVS occurs in call center employees with various symptoms and signs. This situation is not supported by good knowledge of the syndrome, which can hamper prevention programs.

  15. Current state and future direction of computer systems at NASA Langley Research Center

    Science.gov (United States)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  16. Computer-aided dispatch--traffic management center field operational test : state of Utah final report

    Science.gov (United States)

    2006-07-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...

  17. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  18. Initial constructs for patient-centered outcome measures to evaluate brain-computer interfaces.

    Science.gov (United States)

    Andresen, Elena M; Fried-Oken, Melanie; Peters, Betts; Patrick, Donald L

    2016-10-01

    The authors describe preliminary work toward the creation of patient-centered outcome (PCO) measures to evaluate brain-computer interface (BCI) as an assistive technology (AT) for individuals with severe speech and physical impairments (SSPI). In Phase 1, 591 items from 15 existing measures were mapped to the International Classification of Functioning, Disability and Health (ICF). In Phase 2, qualitative interviews were conducted with eight people with SSPI and seven caregivers. Resulting text data were coded in an iterative analysis. Most items (79%) were mapped to the ICF environmental domain; over half (53%) were mapped to more than one domain. The ICF framework was well suited for mapping items related to body functions and structures, but less so for items in other areas, including personal factors. Two constructs emerged from qualitative data: quality of life (QOL) and AT. Component domains and themes were identified for each. Preliminary constructs, domains and themes were generated for future PCO measures relevant to BCI. Existing instruments are sufficient for initial items but do not adequately match the values of people with SSPI and their caregivers. Field methods for interviewing people with SSPI were successful, and support the inclusion of these individuals in PCO research. Implications for Rehabilitation Adapted interview methods allow people with severe speech and physical impairments to participate in patient-centered outcomes research. Patient-centered outcome measures are needed to evaluate the clinical implementation of brain-computer interface as an assistive technology.

  19. Computed tomography-guided core-needle biopsy of lung lesions: an oncology center experience

    Energy Technology Data Exchange (ETDEWEB)

    Guimaraes, Marcos Duarte; Fonte, Alexandre Calabria da; Chojniak, Rubens, E-mail: marcosduarte@yahoo.com.b [Hospital A.C. Camargo, Sao Paulo, SP (Brazil). Dept. of Radiology and Imaging Diagnosis; Andrade, Marcony Queiroz de [Hospital Alianca, Salvador, BA (Brazil); Gross, Jefferson Luiz [Hospital A.C. Camargo, Sao Paulo, SP (Brazil). Dept. of Chest Surgery

    2011-03-15

    Objective: The present study is aimed at describing the experience of an oncology center with computed tomography-guided core-needle biopsy of pulmonary lesions. Materials and Methods: Retrospective analysis of 97 computed tomography-guided core-needle biopsies of pulmonary lesions performed in the period between 1996 and 2004 in a Brazilian reference oncology center (Hospital do Cancer - A.C. Camargo). Information regarding material appropriateness and the specific diagnoses was collected and analyzed. Results: Among 97 lung biopsies, 94 (96.9%) supplied appropriate specimens for histological analyses, with 71 (73.2%) cases being diagnosed as malignant lesions and 23 (23.7%) diagnosed as benign lesions. Specimens were inappropriate for analysis in three cases. The frequency of specific diagnosis was 83 (85.6%) cases, with high rates for both malignant lesions, with 63 (88.7%) cases, and benign lesions, with 20 (86.7%). As regards complications, a total of 12 cases were observed, as follows: 7 (7.2%) cases of hematoma, 3 (3.1%) cases of pneumothorax and 2 (2.1%) cases of hemoptysis. Conclusion: Computed tomography-guided core-needle biopsy of lung lesions demonstrated high rates of material appropriateness and diagnostic specificity, and low rates of complications in the present study. (author)

  20. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University]

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  1. The effective use of virtualization for selection of data centers in a cloud computing environment

    Science.gov (United States)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are facilities that consist of networks of remote servers to store, access and process data. Cloud computing is a technology where users worldwide submit their tasks and the service providers direct the requests to the data centers, which are responsible for the execution of tasks. The servers in the data centers need to employ the virtualization concept so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created in each server. The virtualization energy in each of the servers is calculated, and the total energy of the data center is obtained by the summation of the individual server energies. The tasks submitted are routed to the data center with the least energy consumption, which will result in minimizing the operational expenses of a service provider.
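
    The selection step the abstract describes, summing per-server virtualization energy and routing work to the least-energy data center, can be sketched as follows. The data layout, energy figures and helper names are illustrative assumptions, not the paper's algorithm.

    # Illustrative sketch: route a task to the data center whose virtual
    # machines currently consume the least total energy. Numbers and the
    # energy model are assumptions for demonstration only.

    def server_energy(vm_energies):
        """Energy of one server = sum of the energies of its virtual machines."""
        return sum(vm_energies)

    def data_center_energy(servers):
        """Energy of a data center = sum of its servers' energies."""
        return sum(server_energy(vms) for vms in servers)

    def select_data_center(data_centers):
        """Return the name of the data center with the lowest total energy."""
        return min(data_centers, key=lambda name: data_center_energy(data_centers[name]))

    if __name__ == "__main__":
        # Each data center maps to a list of servers; each server is a list of
        # per-VM energy values (arbitrary units).
        data_centers = {
            "dc-east": [[30.0, 22.5], [18.0]],
            "dc-west": [[12.0, 9.5], [15.0, 7.0]],
            "dc-north": [[25.0], [20.0, 19.0]],
        }
        target = select_data_center(data_centers)
        print(f"route task to {target} "
              f"({data_center_energy(data_centers[target]):.1f} units)")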

  2. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    International Nuclear Information System (INIS)

    Bogdanov, A.V.; Yuzhanin, N.V.; Zolotarev, V.I.; Ezhakova, T.R.

    2017-01-01

    In this article, the problem of supporting scientific projects throughout their lifecycle in the computer center is considered in every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of the services of a computer center. In view of the strong integration of IT infrastructure components with the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  3. The psychology of computer displays in the modern mission control center

    Science.gov (United States)

    Granaas, Michael M.; Rhea, Donald C.

    1988-01-01

    Work at NASA's Western Aeronautical Test Range (WATR) has demonstrated the need for increased consideration of psychological factors in the design of computer displays for the WATR mission control center. These factors include color perception, memory load, and cognitive processing abilities. A review of relevant work in the human factors psychology area is provided to demonstrate the need for this awareness. The information provided should be relevant in control room settings where computerized displays are being used.

  4. Computer-aided dispatch--traffic management center field operational test final detailed test plan : WSDOT deployment

    Science.gov (United States)

    2003-10-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...

  5. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    Science.gov (United States)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article, the problem of supporting scientific projects throughout their lifecycle in the computer center is considered in every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of the services of a computer center. In view of the strong integration of IT infrastructure components with the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  6. Building a Prototype of LHC Analysis Oriented Computing Centers

    Science.gov (United States)

    Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.

    2012-12-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not expert in computing. On the storage side, we are experimenting with storage techniques allowing for remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of user processes at the site, and for an efficient support system in case of problems. We will report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.

  7. Building a Prototype of LHC Analysis Oriented Computing Centers

    International Nuclear Information System (INIS)

    Bagliesi, G; Boccali, T; Della Ricca, G; Donvito, G; Paganoni, M

    2012-01-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not expert in computing. On the storage side, we are experimenting with storage techniques allowing for remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of user processes at the site, and for an efficient support system in case of problems. We will report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.

  8. Computer-aided dispatch--traffic management center field operational test final test plans : state of Utah

    Science.gov (United States)

    2004-01-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...

  9. New developments in delivering public access to data from the National Center for Computational Toxicology at the EPA

    Science.gov (United States)

    Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this researc...

  10. CENTER CONDITIONS AND CYCLICITY FOR A FAMILY OF CUBIC SYSTEMS: COMPUTER ALGEBRA APPROACH.

    Science.gov (United States)

    Ferčec, Brigita; Mahdi, Adam

    2013-01-01

    Using methods of computational algebra, we obtain an upper bound for the cyclicity of a family of cubic systems. We overcome the problem of nonradicality of the associated Bautin ideal by moving from the ring of polynomials to a coordinate ring. Finally, we determine the number of limit cycles bifurcating from each component of the center variety.

  11. Technical Data Management Center: a focal point for meteorological and other environmental transport computing technology

    International Nuclear Information System (INIS)

    McGill, B.; Maskewitz, B.F.; Trubey, D.K.

    1981-01-01

    The Technical Data Management Center (TDMC), which collects, packages, analyzes, and distributes information, computer technology and data that include meteorological and other environmental transport work, is located at the Oak Ridge National Laboratory, within the Engineering Physics Division. Major activities include maintaining a collection of computing technology and associated literature citations to provide capabilities for meteorological and environmental work. Details of the activities on behalf of TDMC's sponsoring agency, the US Nuclear Regulatory Commission, are described.

  12. Bridging the digital divide by increasing computer and cancer literacy: community technology centers for head-start parents and families.

    Science.gov (United States)

    Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith

    2009-01-01

    This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.

  13. Examining the Fundamental Obstructs of Adopting Cloud Computing for 9-1-1 Dispatch Centers in the USA

    Science.gov (United States)

    Osman, Abdulaziz

    2016-01-01

    The purpose of this research study was to examine the unknown fears of embracing cloud computing, which stretch across measurements like fear of change from leaders and the complexity of the technology, in 9-1-1 dispatch centers in the USA. The problem that was addressed in the study was that many 9-1-1 dispatch centers in the USA are still using old…

  14. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    The aim of this study is to present an approach to the introduction of pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows the implementation of an educational platform for a constructivist learning process, thus enabling learners' experimentation with the provided programming models, the acquisition of competences in modern scientific research and computational thinking, and the capture of the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The C programming language was chosen for developing the programming models, with the Message Passing Interface (MPI) and OpenMP as the parallelization tools.
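
    The pipeline concept behind the multiphase queueing model can be illustrated, under simplified assumptions, by a chain of processing stages connected by queues, each stage consuming from the previous queue and feeding the next. The course described above uses C with MPI and OpenMP; the minimal Python sketch below only illustrates the pipeline idea, and all names in it are illustrative.

    import threading
    import queue

    # Minimal sketch of a software pipeline: three stages connected by queues,
    # each stage running in its own thread. Illustrative only; not the C/MPI
    # material of the course described in the abstract.

    SENTINEL = None  # marks the end of the stream

    def stage(func, in_q, out_q):
        """Consume items from in_q, apply func, pass results to out_q."""
        while True:
            item = in_q.get()
            if item is SENTINEL:
                out_q.put(SENTINEL)
                break
            out_q.put(func(item))

    if __name__ == "__main__":
        q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
        stages = [
            threading.Thread(target=stage, args=(lambda x: x + 1, q0, q1)),
            threading.Thread(target=stage, args=(lambda x: x * x, q1, q2)),
            threading.Thread(target=stage, args=(lambda x: f"result={x}", q2, q3)),
        ]
        for t in stages:
            t.start()
        for i in range(5):          # feed the pipeline
            q0.put(i)
        q0.put(SENTINEL)
        while (out := q3.get()) is not SENTINEL:
            print(out)
        for t in stages:
            t.join()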

  15. Annual report of R and D activities in Center for Promotion of Computational Science and Engineering and Center for Computational Science and e-Systems from April 1, 2005 to March 31, 2006

    International Nuclear Information System (INIS)

    2007-03-01

    This report provides an overview of research and development activities in the Center for Computational Science and Engineering (CCSE), JAERI, in the former half of the fiscal year 2005 (April 1, 2005 - Sep. 30, 2005) and those in the Center for Computational Science and e-Systems (CCSE), JAEA, in the latter half of the fiscal year 2005 (Oct 1, 2005 - March 31, 2006). In the former half term, the activities were performed by 5 research groups: the Research Group for Computational Science in Atomic Energy, the Research Group for Computational Material Science in Atomic Energy, the R and D Group for Computer Science, the R and D Group for Numerical Experiments, and the Quantum Bioinformatics Group in CCSE. At the beginning of the latter half term, these 5 groups were integrated into two offices, the Simulation Technology Research and Development Office and the Computer Science Research and Development Office, at the moment of the unification of JNC (Japan Nuclear Cycle Development Institute) and JAERI (Japan Atomic Energy Research Institute), and the latter-half term activities were operated by these two offices. A big project, the ITBL (Information Technology Based Laboratory) project, and fundamental computational research for atomic energy plants were performed mainly by two groups, the R and D Group for Computer Science and the Research Group for Computational Science in Atomic Energy, in the former half term and by their integrated office, the Computer Science Research and Development Office, in the latter half, respectively. The main result was the verification of structural analysis for a real plant, executable on the Grid environment, and the work received Honorable Mentions in the Analytic Challenge at the 'Supercomputing (SC05)' conference. The materials science and bioinformatics in the atomic energy research field were carried out by three groups, the Research Group for Computational Material Science in Atomic Energy, the R and D Group for Computer Science, the R and D Group for Numerical Experiments, and the Quantum Bioinformatics

  16. CNC Turning Center Operations and Prove Out. Computer Numerical Control Operator/Programmer. 444-334.

    Science.gov (United States)

    Skowronski, Steven D.

    This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…

  17. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results
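
    As a toy illustration of the Monte Carlo technique mentioned above, the following sketch estimates the uncollided transmission of mono-energetic particles through a homogeneous slab by sampling exponential free paths and comparing against the analytic exponential attenuation. It is a teaching example under simplified assumptions only, not representative of the production transport codes RSIC distributes.

    import math
    import random

    # Toy Monte Carlo estimate of uncollided transmission through a homogeneous
    # slab: sample the distance to the first collision from an exponential
    # distribution and count particles that cross the slab without colliding.

    def transmission(sigma_total_per_cm, thickness_cm, histories=100_000, seed=1):
        rng = random.Random(seed)
        transmitted = 0
        for _ in range(histories):
            # 1 - random() lies in (0, 1], so the logarithm is always defined.
            distance = -math.log(1.0 - rng.random()) / sigma_total_per_cm
            if distance > thickness_cm:
                transmitted += 1
        return transmitted / histories

    if __name__ == "__main__":
        sigma, t = 0.2, 10.0          # 1/cm and cm (illustrative values)
        mc = transmission(sigma, t)
        analytic = math.exp(-sigma * t)
        print(f"Monte Carlo: {mc:.4f}   analytic exp(-sigma*t): {analytic:.4f}")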

  18. BWR modeling capability and Scale/Triton lattice-to-core integration of the Nestle nodal simulator - 331

    International Nuclear Information System (INIS)

    Galloway, J.; Hernandez, H.; Maldonado, G.I.; Jessee, M.; Popov, E.; Clarno, K.

    2010-01-01

    This article reports the status of recent and substantial enhancements made to the NESTLE nodal core simulator, a code originally developed at North Carolina State University (NCSU), of which version 5.2.1 has been available for several years through the Oak Ridge National Laboratory (ORNL) Radiation Safety Information Computational Center (RSICC) software repository. In its released and available form, NESTLE is a seasoned, well-developed and extensively tested code system particularly useful for modeling PWRs. In collaboration with NCSU, University of Tennessee (UT) and ORNL researchers have recently developed new enhancements for the NESTLE code, including the implementation of a two-phase drift-flux thermal hydraulic and flow redistribution model to facilitate modeling of Boiling Water Reactors (BWRs), as well as the development of an integrated coupling of SCALE/TRITON lattice physics to NESTLE so as to produce an end-to-end capability for reactor simulations. These latest advancements implemented into NESTLE, as well as an update on other ongoing efforts of this project, are reported herein. (authors)

  19. Polymer waveguides for electro-optical integration in data centers and high-performance computers.

    Science.gov (United States)

    Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan

    2015-02-23

    To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.

  20. Computer modeling with randomized-controlled trial data informs the development of person-centered aged care homes.

    Science.gov (United States)

    Chenoweth, Lynn; Vickland, Victor; Stein-Parbury, Jane; Jeon, Yun-Hee; Kenny, Patricia; Brodaty, Henry

    2015-10-01

    To answer questions on the essential components (services, operations and resources) of a person-centered aged care home (iHome) using computer simulation. iHome was developed with AnyLogic software using extant study data obtained from 60 Australian aged care homes, 900+ clients and 700+ aged care staff. Bayesian analysis of simulated trial data will determine the influence of different iHome characteristics on care service quality and client outcomes. Interim results: A person-centered aged care home (socio-cultural context) and care/lifestyle services (interactional environment) can produce positive outcomes for aged care clients (subjective experiences) in the simulated environment. Further testing will define essential characteristics of a person-centered care home.

  1. Computer Center: Software Review.

    Science.gov (United States)

    Duhrkopf, Richard, Ed.; Belshe, John F., Ed.

    1988-01-01

    Reviews a software package, "Mitosis-Meiosis," available for Apple II or IBM computers with colorgraphics capabilities. Describes the documentation, presentation and flexibility of the program. Rates the program based on graphics and usability in a biology classroom. (CW)

  2. Computational Physics Program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1984-12-01

    The principal objective of the computational physics group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. A summary of the group's activities is presented, including computational studies in MHD equilibria and stability, plasma transport, Fokker-Planck calculations, and efficient numerical and programming algorithms. References are included.
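
    For orientation only, the Fokker-Planck codes referred to here evolve a particle distribution function under drag and velocity-space diffusion; a generic form of the equation (not the particular discretization used by the center) is

        \[
          \frac{\partial f}{\partial t}
            \;=\;
            -\frac{\partial}{\partial v_i}\bigl(A_i\,f\bigr)
            \;+\;
            \frac{1}{2}\,\frac{\partial^{2}}{\partial v_i\,\partial v_j}\bigl(D_{ij}\,f\bigr),
        \]

    where A_i is the dynamical-friction (drag) vector and D_{ij} the velocity-space diffusion tensor, both built from Rosenbluth potentials in the Coulomb-collision case.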

  3. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    OpenAIRE

    Buyya, Rajkumar; Beloglazov, Anton; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational cos...

  4. M-center growth in alkali halides: computer simulation

    International Nuclear Information System (INIS)

    Aguilar, M.; Jaque, F.; Agullo-Lopez, F.

    1983-01-01

    The heterogeneous interstitial nucleation model previously proposed to explain F-center growth curves in irradiated alkali halides has been extended to account for M-center kinetics. The interstitials produced during the primary irradiation event are assumed to be trapped at impurities and interstitial clusters or recombine with F and M centers. For M-center formation two cases have been considered: (a) diffusion and aggregation of F centers, and (b) statistical generation and pairing of F centers. Process (b) is the only one consistent with the quadratic relationship between M and F center concentrations. However, to account for the F/M ratios experimentally observed as well as for the role of dose-rate, a modified statistical model involving random creation and association of F + -F pairs has been shown to be adequate. (author)
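
    The quadratic relationship invoked in case (b) can be stated explicitly: if F centers are created at random lattice positions, the number of close F-F pairs that register as M centers scales as the square of the F-center concentration,

        \[
          [\mathrm{M}] \;=\; c\,[\mathrm{F}]^{2},
        \]

    where the constant c is set by the effective pairing (capture) volume around an F center. This is a generic statement of the statistical-pairing argument, not the authors' exact expression.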

  5. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITYEvaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  6. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2002-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are "data intensive" because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  7. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2001-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are "data intensive" because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  8. THE CENTER FOR DATA INTENSIVE COMPUTING

    International Nuclear Information System (INIS)

    GLIMM, J.

    2001-01-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are "data intensive" because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  9. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2003-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are "data intensive" because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  10. Development of a computer system at La Hague center

    International Nuclear Information System (INIS)

    Mimaud, Robert; Malet, Georges; Ollivier, Francis; Fabre, J.-C.; Valois, Philippe; Desgranges, Patrick; Anfossi, Gilbert; Gentizon, Michel; Serpollet, Roger.

    1977-01-01

    The U.P.2 plant, built at the La Hague Center, is intended mainly for the reprocessing of spent fuels coming from (as metal) graphite-gas reactors and (as oxide) light-water, heavy-water and breeder reactors. In each of the five large nuclear units the digital processing of measurements was handled until 1974 by CAE 3030 data processors. During the period 1974-1975 a modern industrial computer system was set up. This system, equipped with T 2000/20 hardware from the Telemecanique company, consists of five measurement acquisition devices (for a total of 1500 lines processed) and two central processing units (CPUs). The connection of these two CPUs (hardware and software) enables the system to be switched automatically to either the first CPU or the second one. The system covers, at present, data processing, threshold monitoring, alarm systems, display devices, periodical listing, and specific calculations concerning the process (balances, etc.), and, at a later stage, automatic control of certain units of the process

  11. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art

  12. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    Energy Technology Data Exchange (ETDEWEB)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art.

  13. A hypothesis on the formation of the primary ossification centers in the membranous neurocranium: a mathematical and computational model.

    Science.gov (United States)

    Garzón-Alvarado, Diego A

    2013-01-21

    This article develops a model of the appearance and location of the primary centers of ossification in the calvaria. The model uses a system of reaction-diffusion equations for two molecules (BMP and Noggin) whose behavior is of the activator-substrate type; its solution produces Turing patterns, which represent the primary ossification centers. Additionally, the model includes the level of cell maturation as a function of the location of mesenchymal cells, so that mature cells can become osteoblasts under the action of BMP2. With this model, the simulation can produce two frontal primary centers, two parietal centers, and one, two or more occipital centers. The location of these centers in the simplified computational model is highly consistent with the centers found at the embryonic level. Copyright © 2012 Elsevier Ltd. All rights reserved.
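
    A generic activator-substrate (depleted-substrate) pair of reaction-diffusion equations of the kind described, written here with placeholder rate constants rather than the values of the cited model, is

        \[
          \frac{\partial u}{\partial t} = D_u \nabla^{2} u + k_1\,u^{2}v - k_2\,u,
          \qquad
          \frac{\partial v}{\partial t} = D_v \nabla^{2} v + k_3 - k_4\,u^{2}v,
        \]

    where u stands for the activator (BMP) and v for the substrate (Noggin), following the pairing given in the abstract. Turing patterns, the stationary spots identified with ossification centers, appear when the substrate diffuses much faster than the activator (D_v >> D_u).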

  14. Computational-physics program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1982-02-01

    The computational physics group is involved in several areas of fusion research. One main area is the application of multidimensional Fokker-Planck, transport and combined Fokker-Planck/transport codes to both toroidal and mirror devices. Another major area is the investigation of linear and nonlinear resistive magnetohydrodynamics in two and three dimensions, with applications to all types of fusion devices. The MHD work is often coupled with the task of numerically generating equilibria which model experimental devices. In addition to these computational physics studies, investigations of more efficient numerical algorithms are being carried out

  15. The Benefits of Making Data from the EPA National Center for Computational Toxicology available for reuse (ACS Fall meeting 3 of 12)

    Science.gov (United States)

    Researchers at EPA’s National Center for Computational Toxicology (NCCT) integrate advances in biology, chemistry, exposure and computer science to help prioritize chemicals for further research based on potential human health risks. The goal of this research is to quickly evalua...

  16. Outline of computer application in PNC

    International Nuclear Information System (INIS)

    Aoki, Minoru

    1990-01-01

    Computer application systems are an important resource for the R and D (research and development) in PNC. Various types of computer systems are widely used in R and D for experiments, evaluation and analysis, plant operation and other jobs in PNC. Currently, PNC operates computer centers at the Oarai Engineering Center and the Tokai Works. The former uses a large-scale digital computer and supercomputer systems; the latter uses only a large-scale digital computer system. These computer systems are joined in the PNC Information Network, which connects the Head Office and the branches (Oarai, Tokai, Ningyotoge and Fugen) by means of a super digital circuit. In the near future, the computer centers will be brought together in order to raise the efficiency of operation of the computer systems. A new computer center, called the 'Information Center', is under construction at the Oarai Engineering Center. (author)

  17. Risk factors for computer visual syndrome (CVS) among operators of two call centers in São Paulo, Brazil.

    Science.gov (United States)

    Sa, Eduardo Costa; Ferreira Junior, Mario; Rocha, Lys Esther

    2012-01-01

    The aims of this study were to investigate work conditions, to estimate the prevalence, and to describe risk factors associated with Computer Vision Syndrome among operators of two call centers in São Paulo (n = 476). The methods included a quantitative cross-sectional observational study and an ergonomic work analysis, using work observation, interviews and questionnaires. The case definition was the presence of one or more specific ocular symptoms answered as always, often or sometimes. The multiple logistic regression model was created using the stepwise forward likelihood method, retaining the variables with significance levels below 5% (p < 0.05). Among specific ocular symptoms, problems with vision were reported by 43.5% of the operators. The prevalence of Computer Vision Syndrome was 54.6%. Associations verified were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7) and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). The organization and psychosocial factors at work should be included in prevention programs of visual syndrome among call center operators.
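
    A minimal sketch of the type of analysis summarized above: adjusted odds ratios with 95% confidence intervals from a logistic regression. The column names, the simulated data and the use of the statsmodels library are illustrative assumptions, not the authors' actual dataset or software.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 476  # number of call center operators in the study

        # Hypothetical one-row-per-operator data: binary outcome 'cvs' plus risk factors
        df = pd.DataFrame({
            "cvs": rng.binomial(1, 0.55, n),
            "female": rng.binomial(1, 0.60, n),
            "lack_recognition": rng.binomial(1, 0.40, n),
            "high_demand": rng.binomial(1, 0.50, n),
        })

        X = sm.add_constant(df[["female", "lack_recognition", "high_demand"]])
        fit = sm.Logit(df["cvs"], X).fit(disp=False)

        # Odds ratios and 95% confidence intervals from the fitted coefficients
        ci = fit.conf_int()
        summary = pd.DataFrame({
            "OR": np.exp(fit.params),
            "CI_low": np.exp(ci[0]),
            "CI_high": np.exp(ci[1]),
        }).drop("const")
        print(summary)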

  18. The computational physics program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. A further area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers

  19. Threat and vulnerability analysis and conceptual design of countermeasures for a computer center under construction

    International Nuclear Information System (INIS)

    Rozen, A.; Musacchio, J.M.

    1988-01-01

    This project involved the assessment of a new computer center to be used as the main national data processing facility of a large European bank. This building serves as the principal facility in the country, with all other branches utilizing the data processing center. As such, the building is a crucial target which may attract terrorist attacks. Threat and vulnerability assessments were performed as a basis to define an overall, fully integrated security system of passive and active countermeasures for the facility. After separately assessing the range of threats and vulnerabilities, a combined matrix of threats and vulnerabilities was used to identify the crucial combinations. A set of architectural-structural passive measures was added to the active components of the security system

  20. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    Directory of Open Access Journals (Sweden)

    André PAGLIOSA

    2015-01-01

    The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey's tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape.
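
    For readers unfamiliar with these quantities, transportation and centering are commonly computed from pre- and post-instrumentation cross-sections using a Gambill-type formulation such as the one below; whether this exact formulation was used in the study is not stated in the record. With m1 and m2 the shortest mesial dentin thicknesses before and after preparation, and d1 and d2 the corresponding distal thicknesses:

        \[
          \text{transportation} \;=\; (m_1 - m_2) \;-\; (d_1 - d_2),
          \qquad
          \text{centering ratio} \;=\;
          \frac{\min\{\,m_1 - m_2,\; d_1 - d_2\,\}}{\max\{\,m_1 - m_2,\; d_1 - d_2\,\}} .
        \]

    A transportation of zero and a centering ratio of 1 describe a preparation that stayed perfectly centered in the original canal.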

  1. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    International Nuclear Information System (INIS)

    Pagliosa, Andre; Raucci-Neto, Walter; Silva-Souza, Yara Teresinha Correa; Alfredo, Edson; Sousa-Neto, Manoel Damiao; Versiani, Marco Aurelio

    2015-01-01

    The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape. (author)

  2. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    Energy Technology Data Exchange (ETDEWEB)

    Pagliosa, Andre; Raucci-Neto, Walter; Silva-Souza, Yara Teresinha Correa; Alfredo, Edson, E-mail: ysousa@unaerp.br [Universidade de Ribeirao Preto (UNAERP), SP (Brazil). Fac. de Odontologia; Sousa-Neto, Manoel Damiao; Versiani, Marco Aurelio [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Fac. de Odoentologia

    2015-03-01

    The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape. (author)

  3. Certification of version 1.2 of the PORFLO-3 code for the WHC scientific and engineering computational center

    International Nuclear Information System (INIS)

    Kline, N.W.

    1994-01-01

    Version 1.2 of the PORFLO-3 Code has migrated from the Hanford Cray computer to workstations in the WHC Scientific and Engineering Computational Center. The workstation-based configuration and acceptance testing are inherited from the CRAY-based configuration. The purpose of this report is to document differences in the new configuration as compared to the parent Cray configuration, and summarize some of the acceptance test results which have shown that the migrated code is functioning correctly in the new environment

  4. Radiation transport Part B: Applications with examples

    International Nuclear Information System (INIS)

    Beutler, D.E.

    1997-01-01

    In the previous sections Len Lorence has described the need, theory, and types of radiation codes that can be applied to model the results of radiation effects tests or working environments for electronics. For the rest of this segment, the author will concentrate on the specific ways the codes can be used to predict device response or analyze radiation test results. Regardless of whether one is predicting responses in a working or test environment, the procedures are virtually the same. The same can be said for the use of 1-, 2-, or 3-dimensional codes and Monte Carlo or discrete ordinates codes. No attempt is made to instruct the student on the specifics of the code. For example, the author will not discuss the details, such as the number of meshes, energy groups, etc. that are appropriate for a discrete ordinates code. For the sake of simplicity, he will restrict himself to the 1-dimensional code CEPXS/ONELD. This code along with a wide variety of other radiation codes can be obtained from the Radiation Safety Information Computational Center (RSICC) for a nominal handling fee

  5. Spectrum of tablet computer use by medical students and residents at an academic medical center

    Directory of Open Access Journals (Sweden)

    Robert Robinson

    2015-07-01

    Introduction. The value of tablet computer use in medical education is an area of considerable interest, with preliminary investigations showing that the majority of medical trainees feel that tablet computers added value to the curriculum. This study investigated potential differences in tablet computer use between medical students and resident physicians. Materials & Methods. Data collection for this survey was accomplished with an anonymous online questionnaire shared with the medical students and residents at Southern Illinois University School of Medicine (SIU-SOM) in July and August of 2012. Results. There were 76 medical student responses (26% response rate) and 66 resident/fellow responses to this survey (21% response rate). Residents/fellows were more likely to use tablet computers several times daily than medical students (32% vs. 20%, p = 0.035). The most common reported uses were for accessing medical reference applications (46%), e-Books (45%), and board study (32%). Residents were more likely than students to use a tablet computer to access an electronic medical record (41% vs. 21%, p = 0.010), review radiology images (27% vs. 12%, p = 0.019), and enter patient care orders (26% vs. 3%, p < 0.001). Discussion. This study shows a high prevalence and frequency of tablet computer use among physicians in training at this academic medical center. Most residents and students use tablet computers to access medical references, e-Books, and to study for board exams. Residents were more likely to use tablet computers to complete clinical tasks. Conclusions. Tablet computer use among medical students and resident physicians was common in this survey. All learners used tablet computers for point of care references and board study. Resident physicians were more likely to use tablet computers to access the EMR, enter patient care orders, and review radiology studies. This difference is likely due to the differing educational and professional demands placed on
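
    The group comparisons quoted above (for example, 32% vs. 20% for several-times-daily use) are of the kind produced by a two-proportion test. The sketch below reconstructs approximate counts from the quoted percentages and group sizes and uses statsmodels; the original analysis software and exact counts are not given in the record, so the computed p-value need not match the published one.

        from statsmodels.stats.proportion import proportions_ztest

        # 66 residents/fellows and 76 medical students responded to the survey.
        nobs = [66, 76]
        # Approximate counts of several-times-daily tablet users, reconstructed
        # from the quoted 32% and 20%.
        counts = [round(0.32 * 66), round(0.20 * 76)]

        z_stat, p_value = proportions_ztest(counts, nobs)
        print(f"z = {z_stat:.2f}, p = {p_value:.3f}")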

  6. Pain, Work-related Characteristics, and Psychosocial Factors among Computer Workers at a University Center.

    Science.gov (United States)

    Mainenti, Míriam Raquel Meira; Felicio, Lilian Ramiro; Rodrigues, Erika de Carvalho; Ribeiro da Silva, Dalila Terrinha; Vigário Dos Santos, Patrícia

    2014-04-01

    [Purpose] Complaint of pain is common in computer workers, encouraging the investigation of pain-related workplace factors. This study investigated the relationship among work-related characteristics, psychosocial factors, and pain among computer workers from a university center. [Subjects and Methods] Fifteen subjects (median age, 32.0 years; interquartile range, 26.8-34.5 years) were subjected to measurement of bioelectrical impedance; photogrammetry; workplace measurements; and pain complaint, quality of life, and motivation questionnaires. [Results] The low back was the most prevalent region of complaint (76.9%). The number of body regions for which subjects complained of pain was greater in the no rest breaks group, which also presented higher prevalences of neck (62.5%) and low back (100%) pain. There were also observed associations between neck complaint and quality of life; neck complaint and head protrusion; wrist complaint and shoulder angle; and use of a chair back and thoracic pain. [Conclusion] Complaint of pain was associated with no short rest breaks, no use of a chair back, poor quality of life, high head protrusion, and shoulder angle while using the mouse of a computer.

  7. Lecture 4: Cloud Computing in Large Computer Centers

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    This lecture will introduce Cloud Computing concepts, identifying and analyzing its characteristics, models, and applications. Also, you will learn how CERN built its Cloud infrastructure and which tools are being used to deploy and manage it. About the speaker: Belmiro Moreira is an enthusiastic software engineer passionate about the challenges and complexities of architecting and deploying Cloud Infrastructures in ve...

  8. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    Science.gov (United States)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and it potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS desires to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the needs of the Langley Science Directorate has to be evaluated by integrating it with real-world operational needs across NASA and the associated maturity that would come with that. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Sciences Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective by specifically using a processing scenario involving the Clouds and Earth's Radiant Energy System (CERES) project.

  9. Changing the batch system in a Tier 1 computing center: why and how

    Science.gov (United States)

    Chierici, Andrea; Dal Pra, Stefano

    2014-06-01

    At the Italian Tier1 center at CNAF we are evaluating the possibility to change the current production batch system. This activity is motivated mainly because we are looking for a more flexible licensing model as well as to avoid vendor lock-in. We performed a technology tracking exercise and among many possible solutions we chose to evaluate Grid Engine as an alternative, because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand if it can fit the requirements of a Tier 1 center, compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: many production software components (accounting and monitoring above all) rely on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centers in a similar situation, where the effort of switching may appear too hard to stand. We will provide guidelines in order to understand how difficult this operation can be and how long the change may take.
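
    The wrappers mentioned above typically hide the submission command of the old and new batch systems behind one interface. The sketch below is a hypothetical minimal example; the command names (qsub for Grid Engine, bsub for a legacy LSF-style system) and the option mapping are generic illustrations, not CNAF's actual tooling.

        import subprocess

        def submit(job_name: str, queue: str, script: str, system: str = "gridengine") -> str:
            """Submit a job script through the selected batch system and return the
            scheduler's raw response; the flag mapping is illustrative only."""
            if system == "gridengine":
                cmd = ["qsub", "-N", job_name, "-q", queue, script]
            elif system == "lsf":
                cmd = ["bsub", "-J", job_name, "-q", queue, script]
            else:
                raise ValueError(f"unknown batch system: {system}")
            return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

        # Accounting and monitoring scripts then call submit(...) instead of qsub/bsub directly.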

  10. Human factors in computing systems: focus on patient-centered health communication at the ACM SIGCHI conference.

    Science.gov (United States)

    Wilcox, Lauren; Patel, Rupa; Chen, Yunan; Shachak, Aviv

    2013-12-01

    Health Information Technologies, such as electronic health records (EHR) and secure messaging, have already transformed interactions among patients and clinicians. In addition, technologies supporting asynchronous communication outside of clinical encounters, such as email, SMS, and patient portals, are being increasingly used for follow-up, education, and data reporting. Meanwhile, patients are increasingly adopting personal tools to track various aspects of health status and therapeutic progress, wishing to review these data with clinicians during consultations. These issues have drawn increasing interest from the human-computer interaction (HCI) community, with special focus on critical challenges in patient-centered interactions and design opportunities that can address these challenges. We saw this community presenting and interacting at the ACM SIGCHI 2013 Conference on Human Factors in Computing Systems (also known as CHI), held April 27 - May 2, 2013, at the Palais des Congrès de Paris in France. CHI 2013 featured many formal avenues to pursue patient-centered health communication: a well-attended workshop, tracks of original research, and a lively panel discussion. In this report, we highlight these events and the main themes we identified. We hope that it will help bring the health care communication and the HCI communities closer together. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. The Internet and Computer User Profile: a questionnaire for determining intervention targets in occupational therapy at mental health vocational centers.

    Science.gov (United States)

    Regev, Sivan; Hadas-Lidor, Noami; Rosenberg, Limor

    2016-08-01

    In this study, the assessment tool "Internet and Computer User Profile" questionnaire (ICUP) is presented and validated. It was developed in order to gather information for setting intervention goals to meet current demands. Sixty-eight subjects aged 23-68 participated in the study. The study group (n = 28) was sampled from two vocational centers. The control group consisted of 40 participants from the general population, sampled by convenience based on the demographics of the study group. Subjects from both groups answered the ICUP questionnaire. Subjects of the study group answered the General Self-Efficacy (GSE) questionnaire and performed the Assessment of Computer Task Performance (ACTP) test in order to examine the convergent validity of the ICUP. Twenty subjects from both groups retook the ICUP questionnaire in order to obtain test-retest results. Differences between groups were tested using multiple analysis of variance (MANOVA) tests. Pearson and Spearman's tests were used for calculating correlations. Cronbach's alpha coefficient and the kappa equivalent were used to assess internal consistency. The results indicate that the questionnaire is valid and reliable. They emphasize that the layout of the ICUP items facilitates a comprehensive examination of the client's perception regarding his or her participation in computer and internet activities. Implications for Rehabilitation The assessment tool "Internet and Computer User Profile" (ICUP) questionnaire is a novel assessment tool that evaluates operative use and individual perception of computer activities. The questionnaire is valid and reliable for use with participants of vocational centers dealing with mental illness. It is essential to facilitate access to computers for people with mental illnesses, seeing that they express similar interest in computers and the internet as people from the general population of the same age. Early intervention will be particularly effective for young

  12. Data Center Consolidation: A Step towards Infrastructure Clouds

    Science.gov (United States)

    Winter, Markus

    Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.

  13. The Soviet center of astronomical data

    International Nuclear Information System (INIS)

    Dluzhnevskaya, O.B.

    1982-01-01

    On the basis of the current French-Soviet cooperation in science and technology, the Astronomical Council of the U.S.S.R. Academy of Sciences and the Strasbourg Center signed in 1977 an agreement on setting up the Soviet Center of Astronomical Data as its filial branch. The Soviet Center was created on the basis of a computation center at the Zvenigorod station of the Astronomical Council of the U.S.S.R. Academy of Sciences, which had already had considerable experience of working with stellar catalogues. In 1979 the Center was equipped with an EC-1033 computer. In 1978-1979 the Soviet Center of Astronomical Data (C.A.D.) received from Strasbourg 96 of the most important catalogues. By September 1981 the list of catalogues available at the Soviet Center had reached 140, some of which are described. (Auth.)

  14. New MCNPX developments

    Energy Technology Data Exchange (ETDEWEB)

    Hendricks, J. S. (John S.); McKinney, G. W. (Gregg W.); Waters, L. S. (Laurie S.); Hughes, H. G. (Henry Grady); Snow, E. C. (Edward Clark)

    2002-01-01

    The Los Alamos National Laboratory Monte Carlo N-Particle extended (MCNPX) radiation transport code has been upgraded significantly to Version MCNPX2.4.0. It is now based on the latest MCNP4C3 and MCNPX2.3.0 releases to the Radiation Safety Information Computational Center (RSICC). In addition to all of the advances from earlier versions of MCNP and MCNPX, important new capabilities have been developed. The Monte Carlo method was developed at Los Alamos National Laboratory during the Manhattan Project in the early 1940s. MCNP and MCNPX are heirs to those early efforts. Over 400 person-years have been invested in the research, development, programming, documentation, and databases for these codes. MCNP is a general-purpose neutron (0-MeV to 20-MeV), photon (1-keV to 1-GeV), and electron (1-keV to 1-GeV) transport code for calculating the time-dependent, continuous-energy transport of these particles in three-dimensional geometries. (MCNPX, MCNP, LAHET, and LCS are trademarks of the Regents of the University of California, Los Alamos National Laboratory.) MCNP is perhaps the most widely used and well-known physics simulation code in the world today. MCNPX extends MCNP to track nearly all particles at all energies. MCNPX combined MCNP and the LAHET Code System (LCS). LCS is based on the Oak Ridge High Energy Transport Code. LCS uses models for particles in physics regimes where there are no tabulated data, including the Bertini and ISABEL models. MCNPX adds further models beyond those in LCS, such as the CEM model. MCNPX2.3.0 was released to RSICC in December 2001 and is based on MCNP4B. The principal features of MCNPX2.3.0 are (1) physics for 34 particle types; (2) high-energy physics above the giga-electron-volt range; (3) neutron, proton, and photonuclear 150-MeV libraries; (4) photonuclear physics; (5) mesh tallies; (6) radiography tallies; (7) secondary particle production biasing; (8) VAVILOV energy straggling for charged particles; and (9) automatic configuration for

  15. Computing at Stanford.

    Science.gov (United States)

    Feigenbaum, Edward A.; Nielsen, Norman R.

    1969-01-01

    This article provides a current status report on the computing and computer science activities at Stanford University, focusing on the Computer Science Department, the Stanford Computation Center, the recently established regional computing network, and the Institute for Mathematical Studies in the Social Sciences. Also considered are such topics…

  16. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    Science.gov (United States)

    Molthan, A.; Limaye, A. S.

    2011-12-01

    Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by

  17. Initial Flight Test of the Production Support Flight Control Computers at NASA Dryden Flight Research Center

    Science.gov (United States)

    Carter, John; Stephenson, Mark

    1999-01-01

    The NASA Dryden Flight Research Center has completed the initial flight test of a modified set of F/A-18 flight control computers that gives the aircraft a research control law capability. The production support flight control computers (PSFCC) provide an increased capability for flight research in the control law, handling qualities, and flight systems areas. The PSFCC feature a research flight control processor that is "piggybacked" onto the baseline F/A-18 flight control system. This research processor allows for pilot selection of research control law operation in flight. To validate flight operation, a replication of a standard F/A-18 control law was programmed into the research processor and flight-tested over a limited envelope. This paper provides a brief description of the system, summarizes the initial flight test of the PSFCC, and describes future experiments for the PSFCC.

  18. Annual report of R and D activities in center for promotion of computational science and engineering from April 1, 2003 to March 31, 2004

    International Nuclear Information System (INIS)

    2005-08-01

    Major research and development activities of the Center for Promotion of Computational Science and Engineering (CCSE), JAERI, have focused on the ITBL (IT Based Laboratory) project, computational material science, and quantum bioinformatics. This report provides an overview of research and development activities in CCSE in the fiscal year 2003 (April 1, 2003 - March 31, 2004). (author)

  19. Environmental Modeling Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...

  20. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    Science.gov (United States)

    Molthan, Andrew L.; Limaye, Ashutosh S.; Srikishen, Jayanthi

    2011-01-01

    Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by

  1. Energy efficient thermal management of data centers

    CERN Document Server

    Kumar, Pramod

    2012-01-01

    Energy Efficient Thermal Management of Data Centers examines energy flow in today's data centers. Particular focus is given to the state-of-the-art thermal management and thermal design approaches now being implemented across the multiple length scales involved. The impact of future trends in information technology hardware, and emerging software paradigms such as cloud computing and virtualization, on thermal management are also addressed. The book explores computational and experimental characterization approaches for determining temperature and air flow patterns within data centers. Thermodynamic analyses using the second law to improve energy efficiency are introduced and used in proposing improvements in cooling methodologies. Reduced-order modeling and robust multi-objective design of next generation data centers are discussed. This book also: Provides in-depth treatment of energy efficiency ideas based on  fundamental heat transfer, fluid mechanics, thermodynamics, controls, and computer science Focus...

  2. The Development of University Computing in Sweden 1965-1985

    Science.gov (United States)

    Dahlstrand, Ingemar

    In 1965-70 the government agency, Statskontoret, set up five university computing centers, as service bureaux financed by grants earmarked for computer use. The centers were well equipped and staffed and caused a surge in computer use. When the yearly flow of grant money stagnated at 25 million Swedish crowns, the centers had to find external income to survive and acquire time-sharing. But the charging system led to the computers not being fully used. The computer scientists lacked equipment for laboratory use. The centers were decentralized and the earmarking abolished. Eventually they got new tasks like running computers owned by the departments, and serving the university administration.

  3. Concurrent validity of an automated algorithm for computing the center of pressure excursion index (CPEI).

    Science.gov (United States)

    Diaz, Michelle A; Gibbons, Mandi W; Song, Jinsup; Hillstrom, Howard J; Choe, Kersti H; Pasquale, Maria R

    2018-01-01

    Center of Pressure Excursion Index (CPEI), a parameter computed from the distribution of plantar pressures during the stance phase of barefoot walking, has been used to assess dynamic foot function. The original custom program developed to calculate CPEI required the oversight of a user who could manually correct for certain exceptions to the computational rules. A new, fully automatic program has been developed to calculate CPEI with an algorithm that accounts for these exceptions. The purpose of this paper is to compare resulting CPEI values computed by these two programs on plantar pressure data from both asymptomatic and pathologic subjects. If comparable, the new program offers significant benefits: reduced potential for variability due to rater discretion and faster CPEI calculation. CPEI values were calculated from barefoot plantar pressure distributions during comfortably paced walking in 61 healthy asymptomatic adults, 19 diabetic adults with moderate hallux valgus, and 13 adults with mild hallux valgus. Right-foot data for each subject were analyzed with linear regression and a Bland-Altman plot. The automated algorithm yielded CPEI values that were linearly related to those from the original program (R² = 0.99; P < 0.001), and the Bland-Altman analysis indicated close agreement between computation methods. Results of this analysis suggest that the new automated algorithm may be used to calculate CPEI on both healthy and pathologic feet. Copyright © 2017 Elsevier B.V. All rights reserved.
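
    The agreement analysis described in the record (a linear fit plus a Bland-Altman plot between the two programs' CPEI values) can be sketched as follows; the arrays are placeholders for per-subject CPEI values, and numpy/matplotlib stand in for whatever software the authors actually used.

        import numpy as np
        import matplotlib.pyplot as plt

        # Placeholder per-subject CPEI values from the original and automated programs
        cpei_manual = np.array([18.2, 21.5, 15.9, 24.1, 19.7, 22.3])
        cpei_auto = np.array([18.0, 21.8, 16.1, 23.9, 19.5, 22.6])

        # Linear relationship between the two methods
        slope, intercept = np.polyfit(cpei_manual, cpei_auto, 1)
        r_squared = np.corrcoef(cpei_manual, cpei_auto)[0, 1] ** 2

        # Bland-Altman quantities: bias and 95% limits of agreement
        diff = cpei_auto - cpei_manual
        mean = (cpei_auto + cpei_manual) / 2
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)

        plt.scatter(mean, diff)
        plt.axhline(bias)
        plt.axhline(bias + loa, linestyle="--")
        plt.axhline(bias - loa, linestyle="--")
        plt.xlabel("Mean CPEI of the two programs")
        plt.ylabel("Difference (automated - original)")
        plt.title(f"Bland-Altman plot (linear fit R^2 = {r_squared:.2f})")
        plt.show()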

  4. Computer technology and computer programming research and strategies

    CERN Document Server

    Antonakos, James L

    2011-01-01

    Covering a broad range of new topics in computer technology and programming, this volume discusses encryption techniques, SQL generation, Web 2.0 technologies, and visual sensor networks. It also examines reconfigurable computing, video streaming, animation techniques, and more. Readers will learn about an educational tool and game to help students learn computer programming. The book also explores a new medical technology paradigm centered on wireless technology and cloud computing designed to overcome the problems of increasing health technology costs.

  5. Senior Computational Scientist | Center for Cancer Research

    Science.gov (United States)

    The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP),

  6. Software package as an information center product

    International Nuclear Information System (INIS)

    Butler, M.K.

    1977-01-01

    The Argonne Code Center serves as a software exchange and information center for the U.S. Energy Research and Development Administration and the Nuclear Regulatory Commission. The goal of the Center's program is to provide a means for sharing of software among agency offices and contractors, and for transferring computing applications and technology, developed within the agencies, to the information-processing community. A major activity of the Code Center is the acquisition, review, testing, and maintenance of a collection of software--computer systems, applications programs, subroutines, modules, and data compilations--prepared by agency offices and contractors to meet programmatic needs. A brief review of the history of computer program libraries and software sharing is presented to place the Code Center activity in perspective. The state-of-the-art discussion starts off with an appropriate definition of the term software package, together with descriptions of recommended package contents and the Center's package evaluation activity. An effort is made to identify the various users of the product, to enumerate their individual needs, to document the Center's efforts to meet these needs and the ongoing interaction with the user community. Desirable staff qualifications are considered, and packaging problems reviewed. The paper closes with a brief look at recent developments and a forecast of things to come. 2 tables

  7. Rethinking Human-Centered Computing: Finding the Customer and Negotiated Interactions at the Airport

    Science.gov (United States)

    Wales, Roxana; O'Neill, John; Mirmalek, Zara

    2003-01-01

    The breakdown in the air transportation system over the past several years raises an interesting question for researchers: How can we help improve the reliability of airline operations? In offering some answers to this question, we make a statement about Human-Centered Computing (HCC). First we offer the definition that HCC is a multi-disciplinary research and design methodology focused on supporting humans as they use technology by including cognitive and social systems, computational tools and the physical environment in the analysis of organizational systems. We suggest that a key element in understanding organizational systems is that there are external cognitive and social systems (customers) as well as internal cognitive and social systems (employees) and that they interact dynamically to impact the organization and its work. The design of human-centered intelligent systems must take this outside-inside dynamic into account. In the past, the design of intelligent systems has focused on supporting the work and improvisation requirements of employees but has often assumed that customer requirements are implicitly satisfied by employee requirements. Taking a customer-centric perspective provides a different lens for understanding this outside-inside dynamic, the work of the organization and the requirements of both customers and employees. In this article we will: 1) Demonstrate how the use of ethnographic methods revealed the important outside-inside dynamic in an airline, specifically the consequential relationship between external customer requirements and perspectives and internal organizational processes and perspectives as they came together in a changing environment; 2) Describe how taking a customer-centric perspective identifies places where the impact of the outside-inside dynamic is most critical and requires technology that can be adaptive; 3) Define and discuss the place of negotiated interactions in airline operations, identifying how these

  8. Effort-reward imbalance and one-year change in neck-shoulder and upper extremity pain among call center computer operators.

    Science.gov (United States)

    Krause, Niklas; Burgel, Barbara; Rempel, David

    2010-01-01

    The literature on psychosocial job factors and musculoskeletal pain is inconclusive in part due to insufficient control for confounding by biomechanical factors. The aim of this study was to investigate prospectively the independent effects of effort-reward imbalance (ERI) at work on regional musculoskeletal pain of the neck and upper extremities of call center operators after controlling for (i) duration of computer use both at work and at home, (ii) ergonomic workstation design, (iii) physical activities during leisure time, and (iv) other individual worker characteristics. This was a one-year prospective study among 165 call center operators who participated in a randomized ergonomic intervention trial that has been described previously. Over an approximately four-week period, we measured ERI and 28 potential confounders via a questionnaire at baseline. Regional upper-body pain and computer use were measured by weekly surveys for up to 12 months following the implementation of ergonomic interventions. Regional pain change scores were calculated as the difference between average weekly pain scores pre- and post-intervention. A significant relationship was found between high average ERI ratios and one-year increases in right upper-extremity pain after adjustment for pre-intervention regional mean pain score, current and past physical workload, ergonomic workstation design, and anthropometric, sociodemographic, and behavioral risk factors. No significant associations were found with change in neck-shoulder or left upper-extremity pain. This study suggests that ERI predicts regional upper-extremity pain in computer operators working ≥20 hours per week. Control for physical workload and ergonomic workstation design was essential for identifying ERI as a risk factor.

  9. Magnetic-fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  11. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables

  12. Neutron Flux and Activation Calculations for a High Current Deuteron Accelerator

    CERN Document Server

    Coniglio, Angela; Sandri, Sandro

    2005-01-01

    Neutron analysis of the first Neutral Beam (NB) for the International Thermonuclear Experimental Reactor (ITER) was performed to provide the basis for the study of the following main aspects: personnel safety during normal operation and maintenance, radiation shielding design, transportability of the NB components in the European countries. The first ITER NB is a medium energy light particle accelerator. In the scenario considered for the calculation the accelerated particles are negative deuterium ions with maximum energy of 1 MeV. The average beam current is 13.3 A. To assess neutron transport in the ITER NB structure a mathematical model of the components geometry was implemented into MCNP computer code (MCNP version 4c2. "Monte Carlo N-Particle Transport Code System." RSICC Computer Code Collection. June 2001). The neutron source definition was outlined considering both D-D and D-T neutron production. FISPACT code (R.A. Forrest, FISPACT-2003. EURATOM/UKAEA Fusion, December 2002) was used to assess neutron...

  13. Annual report of R and D activities in Center for Computational Science and e-Systems from April 1, 2006 to March 31, 2007

    International Nuclear Information System (INIS)

    2008-03-01

    This report provides an overview of the research and development activities of the Center for Computational Science and e-Systems (CCSE), JAEA in fiscal year 2006 (April 1, 2006 - March 31, 2007). These research and development activities have been performed by the Simulation Technology Research and Development Office and the Computer Science Research and Development Office. The primary results of the research and development activities are the development of simulation techniques for a virtual earthquake testbed, an intelligent infrastructure for atomic energy research, computational biological disciplines to predict DNA repair function of protein, and material models for a neutron detection device, crack propagation, and gas bubble formation in nuclear fuel. (author)

  14. Fluid dynamics parallel computer development at NASA Langley Research Center

    Science.gov (United States)

    Townsend, James C.; Zang, Thomas A.; Dwoyer, Douglas L.

    1987-01-01

    To accomplish more detailed simulations of highly complex flows, such as the transition to turbulence, fluid dynamics research requires computers much more powerful than any available today. Only parallel processing on multiple-processor computers offers hope for achieving the required effective speeds. Looking ahead to the use of these machines, the fluid dynamicist faces three issues: algorithm development for near-term parallel computers, architecture development for future computer power increases, and assessment of possible advantages of special purpose designs. Two projects at NASA Langley address these issues. Software development and algorithm exploration is being done on the FLEX/32 Parallel Processing Research Computer. New architecture features are being explored in the special purpose hardware design of the Navier-Stokes Computer. These projects are complementary and are producing promising results.

  15. View-CXS neutron and photon cross-sections viewer

    International Nuclear Information System (INIS)

    Subbaiah, K.V.; Sunil Sunny, C.

    2004-01-01

    A graphical user-friendly interface is developed in Visual Basic (VB)-6 to view the variation of neutron and photon interaction cross-sections of different isotopes as a function of energy. The VB subroutines developed read the binary cross-section data files created in MCNP-ACE (Briesmeister, J.F., 1993. MCNP - a general purpose Monte Carlo N-Particle Transport code. Version 4A. LANL, USA), ANISN-DLC (Engle W.W. Jr., 1967, A User's Manual for ANISN, K-1693; ORNL, 1974. 100 group neutron cross section data based on ENDF/B-III. Oak Ridge National Laboratory, USA) and KENO-AMPX (Petrie, L.M., Landers, N.F., 1984. KENO-Va: An Improved Monte Carlo Criticality Program with Super Grouping. RSICC-CCC-548, USA) formats using the LAHEY-77 Fortran Compiler. The information on isotopes present in each library is displayed with the help of database files prepared using Microsoft Access. The cross-section data can be viewed in different presentation styles, namely line graphs, bar graphs, histograms, etc., with different color and symbol options. The cross-section plots generated can be saved as bitmap files to embed in other text files. This software enables intercomparison of cross-sections from different types of libraries for isotopes as well as mixtures. Provision is made to view the cross-sections for nuclear reactions such as (n,γ), (n,f), (n,α), etc. The software can be obtained from the Radiation Safety Information Computational Center (RSICC), ORNL, USA, with the code package identification number PSR-514. The software package needs hard disk space of about 80 MB when installed and works under the WINDOWS-95/98/2000 operating systems.
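
    As an illustration of the kind of display such a viewer produces, the following Python sketch plots a cross-section-versus-energy curve on log-log axes using matplotlib; the data points are hypothetical placeholders, not values read from an ACE, DLC or AMPX library:

    # Sketch of a cross-section plot in the style View-CXS produces.
    # The points below are hypothetical, not values from a nuclear data library.
    import matplotlib.pyplot as plt

    energy_mev = [1e-8, 1e-6, 1e-4, 1e-2, 1.0, 10.0]   # neutron energy (MeV)
    sigma_barn = [90.0, 12.0, 4.5, 3.9, 2.8, 1.5]       # cross section (barn)

    plt.loglog(energy_mev, sigma_barn, marker="o")
    plt.xlabel("Neutron energy (MeV)")
    plt.ylabel("Cross section (barn)")
    plt.title("Hypothetical (n, total) cross section")
    plt.grid(True, which="both", linewidth=0.3)
    plt.savefig("cross_section.png")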

  16. Benefits Analysis of Multi-Center Dynamic Weather Routes

    Science.gov (United States)

    Sheth, Kapil; McNally, David; Morando, Alexander; Clymer, Alexis; Lock, Jennifer; Petersen, Julien

    2014-01-01

    Dynamic weather routes are flight plan corrections that can provide airborne flights more than a user-specified number of minutes of flying-time savings, compared to their current flight plan. These routes are computed from the aircraft's current location to a flight plan fix downstream (within a predefined limit region), while avoiding forecasted convective weather regions. The Dynamic Weather Routes automation has been continuously running with live air traffic data for a field evaluation at the American Airlines Integrated Operations Center in Fort Worth, TX since July 31, 2012, where flights within the Fort Worth Air Route Traffic Control Center are evaluated for time savings. This paper extends the methodology to all Centers in the United States and presents a benefits analysis of the Dynamic Weather Routes automation, if it were implemented in multiple airspace Centers individually and concurrently. The current computation of dynamic weather routes requires a limit rectangle so that a downstream capture fix can be selected, preventing very large route changes spanning several Centers. In this paper, first, a method of computing a limit polygon (as opposed to the rectangle used for Fort Worth Center) is described for each of the 20 Centers in the National Airspace System. The Future ATM Concepts Evaluation Tool, a nationwide simulation and analysis tool, is used for this purpose. After a comparison of results with the Center-based Dynamic Weather Routes automation in Fort Worth Center, results are presented for 11 Centers in the contiguous United States. These Centers are generally most impacted by convective weather. A breakdown of individual Center and airline savings is presented, and the results indicate that an overall average savings of about 10 minutes of flying time is obtained per flight.
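
    The elementary geometric test implied by the limit polygon is a point-in-polygon check on the candidate downstream capture fix. The Python sketch below uses a standard ray-casting test with hypothetical coordinates; it is not the Dynamic Weather Routes or FACET implementation:

    # Sketch: check whether a candidate capture fix lies inside a Center's
    # limit polygon (ray casting). Coordinates are hypothetical.
    def point_in_polygon(lon, lat, polygon):
        """Return True if (lon, lat) is inside polygon (list of vertices)."""
        inside = False
        j = len(polygon) - 1
        for i in range(len(polygon)):
            xi, yi = polygon[i]
            xj, yj = polygon[j]
            crosses = (yi > lat) != (yj > lat)
            if crosses and lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
                inside = not inside
            j = i
        return inside

    # Hypothetical limit polygon (lon, lat vertices) and candidate capture fix.
    limit_polygon = [(-103.0, 30.0), (-94.0, 30.0), (-94.0, 37.5), (-103.0, 37.5)]
    capture_fix = (-97.0, 32.9)
    print(point_in_polygon(*capture_fix, limit_polygon))  # True for this example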

  17. Relative Lyapunov Center Bifurcations

    DEFF Research Database (Denmark)

    Wulff, Claudia; Schilder, Frank

    2014-01-01

    Relative equilibria (REs) and relative periodic orbits (RPOs) are ubiquitous in symmetric Hamiltonian systems and occur, for example, in celestial mechanics, molecular dynamics, and rigid body motion. REs are equilibria, and RPOs are periodic orbits of the symmetry reduced system. Relative Lyapunov center bifurcations are bifurcations of RPOs from REs corresponding to Lyapunov center bifurcations of the symmetry reduced dynamics. In this paper we first prove a relative Lyapunov center theorem by combining recent results on the persistence of RPOs in Hamiltonian systems with a symmetric Lyapunov center theorem of Montaldi, Roberts, and Stewart. We then develop numerical methods for the detection of relative Lyapunov center bifurcations along branches of RPOs and for their computation. We apply our methods to Lagrangian REs of the N-body problem.

  18. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, test and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  19. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  20. Production Support Flight Control Computers: Research Capability for F/A-18 Aircraft at Dryden Flight Research Center

    Science.gov (United States)

    Carter, John F.

    1997-01-01

    NASA Dryden Flight Research Center (DFRC) is working with the United States Navy to complete ground testing and initiate flight testing of a modified set of F/A-18 flight control computers. The Production Support Flight Control Computers (PSFCC) can give any fleet F/A-18 airplane an in-flight, pilot-selectable research control law capability. NASA DFRC can efficiently flight test the PSFCC for the following four reasons: (1) Six F/A-18 chase aircraft are available which could be used with the PSFCC; (2) An F/A-18 processor-in-the-loop simulation exists for validation testing; (3) The expertise has been developed in programming the research processor in the PSFCC; and (4) A well-defined process has been established for clearing flight control research projects for flight. This report presents a functional description of the PSFCC. Descriptions of the NASA DFRC facilities, PSFCC verification and validation process, and planned PSFCC projects are also provided.

  1. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  2. An Audit on the Appropriateness of Coronary Computed Tomography Angiography Referrals in a Tertiary Cardiac Center.

    Science.gov (United States)

    Alderazi, Ahmed Ali; Lynch, Mary

    2017-01-01

    In response to growing concerns regarding the overuse of coronary computed tomography angiography (CCTA) in the clinical setting, multiple societies, including the American College of Cardiology Foundation, have jointly published revised criteria regarding the appropriate use of this imaging modality. However, previous research indicates significant discrepancies in the rate of adherence to these guidelines. To assess the appropriateness of CCTA referrals in a tertiary cardiac center in Bahrain. This retrospective clinical audit examined the records of patients referred to CCTA between the April 1, 2015 and December 31, 2015 in Mohammed bin Khalifa Cardiac Center. Using information from medical records, each case was meticulously audited against guidelines to categorize it as appropriate, inappropriate, or uncertain. Of the 234 records examined, 176 (75.2%) were appropriate, 47 (20.1%) were uncertain, and 11 (4.7%) were inappropriate. About 74.4% of all referrals were to investigate coronary artery disease (CAD). The most common indication that was deemed appropriate was the detection of CAD in the setting of suspected ischemic equivalent in patients with an intermediate pretest probability of CAD (65.9%). Most referrals deemed inappropriate were requested to detect CAD in asymptomatic patients at low or intermediate risk of CAD (63.6%). This audit demonstrates a relatively low rate of inappropriate CCTA referrals, indicating the appropriate and efficient use of this resource in the Mohammed bin Khalifa Cardiac Center. Agreement on and reclassification of "uncertain" cases by guideline authorities would facilitate a deeper understanding of referral appropriateness.

  3. Cone-beam Computed Tomographic Assessment of Canal Centering Ability and Transportation after Preparation with Twisted File and Bio RaCe Instrumentation.

    Directory of Open Access Journals (Sweden)

    Kiamars Honardar

    2014-08-01

    Use of rotary Nickel-Titanium (NiTi) instruments for endodontic preparation has introduced a new era in endodontic practice, but this issue has undergone dramatic modifications in order to achieve improved shaping abilities. Cone-beam computed tomography (CBCT) has made it possible to accurately evaluate geometrical changes following canal preparation. This study was carried out to compare canal centering ability and transportation of Twisted File and BioRaCe rotary systems by means of cone-beam computed tomography. Thirty root canals from freshly extracted mandibular and maxillary teeth were selected. Teeth were mounted and scanned before and after preparation by CBCT at different apical levels. Specimens were divided into 2 groups of 15. In the first group Twisted File and in the second, BioRaCe was used for canal preparation. Canal transportation and centering ability after preparation were assessed by NNT Viewer and Photoshop CS4 software. Statistical analysis was performed using t-test and two-way ANOVA. All samples showed deviations from the original axes of the canals. No significant differences were detected between the two rotary NiTi instruments for canal centering ability in all sections. Regarding canal transportation however, a significant difference was seen in the BioRaCe group at 7.5 mm from the apex. Under the conditions of this in vitro study, Twisted File and BioRaCe rotary NiTi files retained original canal geometry.
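
    Canal transportation and the centering ratio are commonly computed from pre- and post-preparation dentin-wall measurements using the formulation of Gambill et al.; the Python sketch below illustrates that arithmetic with hypothetical measurements, and the study's exact procedure may differ:

    # Sketch of the transportation and centering-ratio calculations commonly
    # used with pre/post-preparation CBCT measurements (after Gambill et al.).
    # Numbers are hypothetical; assumes dentin was removed on both walls.
    def transportation(m1, m2, d1, d2):
        """m1/d1: mesial/distal dentin thickness before preparation (mm);
        m2/d2: the same after preparation. Returns canal transportation (mm)."""
        return abs((m1 - m2) - (d1 - d2))

    def centering_ratio(m1, m2, d1, d2):
        """Ratio of the smaller to the larger wall reduction; 1 = perfectly centered."""
        dm, dd = m1 - m2, d1 - d2
        lo, hi = sorted([dm, dd])
        return 1.0 if hi == lo else (lo / hi if hi != 0 else 0.0)

    # Hypothetical measurements at one apical level (mm).
    m1, m2, d1, d2 = 1.20, 0.95, 1.35, 1.02
    print(transportation(m1, m2, d1, d2))   # ≈ 0.08 mm
    print(centering_ratio(m1, m2, d1, d2))  # ≈ 0.76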

  4. International Conference of Intelligence Computation and Evolutionary Computation ICEC 2012

    CERN Document Server

    Intelligence Computation and Evolutionary Computation

    2013-01-01

    2012 International Conference of Intelligence Computation and Evolutionary Computation (ICEC 2012) is held on July 7, 2012 in Wuhan, China. This conference is sponsored by Information Technology & Industrial Engineering Research Center.  ICEC 2012 is a forum for presentation of new research results of intelligent computation and evolutionary computation. Cross-fertilization of intelligent computation, evolutionary computation, evolvable hardware and newly emerging technologies is strongly encouraged. The forum aims to bring together researchers, developers, and users from around the world in both industry and academia for sharing state-of-art results, for exploring new areas of research and development, and to discuss emerging issues facing intelligent computation and evolutionary computation.

  5. Activity-based computing: computational management of activities reflecting human intention

    DEFF Research Database (Denmark)

    Bardram, Jakob E; Jeuris, Steven; Houben, Steven

    2015-01-01

    paradigm that has been applied in personal information management applications as well as in ubiquitous, multidevice, and interactive surface computing. ABC has emerged as a response to the traditional application- and file-centered computing paradigm, which is oblivious to a notion of a user’s activity...

  6. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  7. Handbook on data centers

    CERN Document Server

    Khan, Samee Ullah

    2015-01-01

    This handbook offers a comprehensive review of the state-of-the-art research achievements in the field of data centers. Contributions from international, leading researchers and scholars offer topics in cloud computing, virtualization in data centers, energy efficient data centers, and next generation data center architecture. It also comprises current research trends in emerging areas, such as data security, data protection management, and network resource management in data centers. Specific attention is devoted to industry needs associated with the challenges faced by data centers, such as various power, cooling, floor space, and associated environmental health and safety issues, while still working to support growth without disrupting quality of service. The contributions cut across various IT data technology domains as a single source to discuss the interdependencies that need to be supported to enable a virtualized, next-generation, energy efficient, economical, and environmentally friendly data center.

  8. Technical Note: A simulation study on the feasibility of radiotherapy dose enhancement with calcium tungstate and hafnium oxide nano- and microparticles.

    Science.gov (United States)

    Sherck, Nicholas J; Won, You-Yeon

    2017-12-01

    To assess the radiotherapy dose enhancement (RDE) potential of calcium tungstate (CaWO4) and hafnium oxide (HfO2) nano- and microparticles (NPs). A Monte Carlo simulation study was conducted to gauge their respective RDE potentials relative to that of the broadly studied gold (Au) NP. The study was warranted due to the promising clinical and preclinical studies involving both CaWO4 and HfO2 NPs as RDE agents in the treatment of various types of cancers. The study provides a baseline RDE to which future experimental RDE trends can be compared. All three materials were investigated in silico with the software Penetration and Energy Loss of Positrons and Electrons (PENELOPE 2014) developed by Francesc Salvat and distributed in the United States by the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory. The work utilizes the extensively studied Au NP as the "gold standard" for a baseline. The key metric used in the evaluation of the materials was the local dose enhancement factor (DEFloc). An additional metric used, termed the relative enhancement ratio (RER), evaluates material performance at the same mass concentrations. The results of the study indicate that Au has the strongest RDE potential using the DEFloc metric. HfO2 and CaWO4 both underperformed relative to Au, with DEFloc lower by 2-3× and 4-100×, respectively. The computational investigation predicts the RDE performance ranking to be: Au > HfO2 > CaWO4. © 2017 American Association of Physicists in Medicine.
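
    The comparison rests on simple ratios: DEFloc is the local dose with particles present divided by the dose without them, and the RER compares two materials' enhancements at the same mass concentration. The Python sketch below illustrates only this metric arithmetic with hypothetical tally values; it is not the PENELOPE simulation, and the paper's exact definitions may differ in detail:

    # Illustration of the ratio arithmetic behind the two metrics named above.
    # Tally values are hypothetical, not PENELOPE output.
    def def_loc(dose_with_np, dose_without_np):
        """Local dose enhancement factor: dose with particles / dose without."""
        return dose_with_np / dose_without_np

    def rer(def_material_a, def_material_b):
        """Relative enhancement ratio of material A with respect to material B."""
        return def_material_a / def_material_b

    dose_baseline = 1.00   # arbitrary units, water only
    dose_au = 4.20         # hypothetical tally with Au particles
    dose_hfo2 = 2.10       # hypothetical tally with HfO2 particles

    print(def_loc(dose_au, dose_baseline))     # 4.2
    print(def_loc(dose_hfo2, dose_baseline))   # 2.1
    print(rer(def_loc(dose_hfo2, dose_baseline),
              def_loc(dose_au, dose_baseline)))  # 0.5 at equal mass concentration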

  9. NEAMS Software Licensing, Release, and Distribution: Implications for FY2013 Work Package Planning

    International Nuclear Information System (INIS)

    Bernholdt, David E.

    2012-01-01

    The vision of the NEAMS program is to bring truly predictive modeling and simulation (M and S) capabilities to the nuclear engineering community in order to enable a new approach to the analysis of nuclear systems. NEAMS anticipates issuing in FY 2018 a full release of its computational 'Fermi Toolkit' aimed at advanced reactor and fuel cycles. The NEAMS toolkit involves extensive software development activities, some of which have already been underway for several years, however, the Advanced Modeling and Simulation Office (AMSO), which sponsors the NEAMS program, has not yet issued any official guidance regarding software licensing, release, and distribution policies. This motivated an FY12 task in the Capability Transfer work package to develop and recommend an appropriate set of policies. The current preliminary report is intended to provide awareness of issues with implications for work package planning for FY13. We anticipate a small amount of effort associated with putting into place formal licenses and contributor agreements for NEAMS software which doesn't already have them. We do not anticipate any additional effort or costs associated with software release procedures or schedules beyond those dictated by the quality expectations for the software. The largest potential costs we anticipate would be associated with the setup and maintenance of shared code repositories for development and early access to NEAMS software products. We also anticipate an opportunity, with modest associated costs, to work with the Radiation Safety Information Computational Center (RSICC) to clarify export control assessment policies for software under development.

  10. Networking at NASA. Johnson Space Center

    Science.gov (United States)

    Garman, John R.

    1991-01-01

    A series of viewgraphs on computer networks at the Johnson Space Center (JSC) are given. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, networks evolution, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.

  11. Annual report of R and D activities in center for promotion of computational science and engineering from April 1, 2004 to March 31, 2005

    International Nuclear Information System (INIS)

    2005-09-01

    This report provides an overview of research and development activities in Center for Promotion of Computational Science and Engineering (CCSE), JAERI, in the fiscal year 2004 (April 1, 2004 - March 31, 2005). The activities have been performed by Research Group for Computational Science in Atomic Energy, Research Group for Computational Material Science in Atomic Energy, R and D Group for Computer Science, R and D Group for Numerical Experiments, and Quantum Bioinformatics Group in CCSE. The ITBL (Information Technology Based Laboratory) project is performed mainly by the R and D Group for Computer Science and the Research Group for Computational Science in Atomic Energy. According to the mid-term evaluation for the ITBL project conducted by the MEXT, the achievement of the ITBL infrastructure software developed by JAERI has been remarked as outstanding at the 13th Information Science and Technology Committee in the Subdivision on R and D Planning and Evaluation of the Council for Science and Technology on April 26th, 2004. (author)

  12. Establishment of computed tomography reference dose levels in Onassis Cardiac Surgery Center

    International Nuclear Information System (INIS)

    Tsapaki, V.; Kyrozi, E.; Syrigou, T.; Mastorakou, I.; Kottou, S.

    2001-01-01

    The purpose of the study was to apply European Commission (EC) Reference Dose Levels (RDL) in Computed Tomography (CT) examinations at the Onassis Cardiac Surgery Center (OCSC). These are the weighted CT Dose Index (CTDIw) for a single slice and the Dose-Length Product (DLP) for a complete examination. During the period 1998-1999, the total number of CT examinations, every type of CT examination, patient-related data and the technical parameters of the examinations were recorded. The most frequent examinations, which were head, chest, abdomen and pelvis, were chosen for investigation. CTDI measurements were performed and CTDIw and DLP were calculated. Third-quartile values of CTDIw were chosen to be 43 mGy for head, 8 mGy for chest, and 22 mGy for abdomen and pelvis examinations. Third-quartile values of DLP were chosen to be 740 mGy·cm for head, 370 mGy·cm for chest, 490 mGy·cm for abdomen and 420 mGy·cm for pelvis examinations. Results confirm that OCSC successfully follows the proposed RDL for head, chest, abdomen and pelvis examinations in terms of radiation dose. (author)
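
    A scan can be screened against such reference dose levels with simple arithmetic; for a pitch-1 acquisition the DLP is roughly the weighted dose index integrated over the scanned length (DLP ≈ CTDIw × L). The Python sketch below uses the third-quartile values quoted above and a hypothetical chest examination:

    # Screening of examination dose values against the reference dose levels
    # quoted above. The example scan values are hypothetical; DLP is estimated
    # with the simplified pitch-1 relation DLP = CTDIw * scanned length.
    RDL = {  # exam: (CTDIw in mGy, DLP in mGy*cm), from the abstract above
        "head": (43, 740),
        "chest": (8, 370),
        "abdomen": (22, 490),
        "pelvis": (22, 420),
    }

    def check_exam(exam, ctdi_w_mgy, scan_length_cm):
        dlp = ctdi_w_mgy * scan_length_cm          # simplified pitch-1 estimate
        ref_ctdi, ref_dlp = RDL[exam]
        print(f"{exam}: CTDIw {ctdi_w_mgy} mGy (RDL {ref_ctdi}), "
              f"DLP {dlp:.0f} mGy*cm (RDL {ref_dlp})")
        return ctdi_w_mgy <= ref_ctdi and dlp <= ref_dlp

    check_exam("chest", 7.5, 35)   # hypothetical chest scan, within both RDLs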

  13. Electricity Infrastructure Operations Center (EIOC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Electricity Infrastructure Operations Center (EIOC) at PNNL brings together industry-leading software, real-time grid data, and advanced computation into a fully...

  14. USSR Report, Cybernetics Computers and Automation Technology

    Science.gov (United States)

    1985-09-05

    organization, the SKALD program utilizes a dictionary or data base to generate SKALD poetry at the computer center of Minsk State Pedagogical ...wonderful capabilities at the Krasnoyarsk branch of the USSR AN [Academy of Sciences] Siberian section's Computer Center. They began training the kids

  15. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  16. Performance indicators for call centers with impatience

    NARCIS (Netherlands)

    Jouini, O.; Koole, G.M.; Roubos, A.

    2013-01-01

    An important feature of call center modeling is the presence of impatient customers. This article considers single-skill call centers including customer abandonments. A number of different service-level definitions are structured, including all those used in practice, and the explicit computation of

  17. Datacenter Changes vs. Employment Rates for Datacenter Managers In the Cloud Computing Era

    OpenAIRE

    Mirzoev, Timur; Benson, Bruce; Hillhouse, David; Lewis, Mickey

    2014-01-01

    Due to the evolving Cloud Computing paradigm, there is a prevailing concern that in the near future data center managers may be in short supply. Cloud computing, as a whole, is becoming more prevalent in today's computing world. In fact, cloud computing has become so popular that some are now referring to data centers as cloud centers. How does this interest in cloud computing translate into employment rates for data center managers? The popularity of the public and private cloud models are...

  18. User perspectives on computer applications

    International Nuclear Information System (INIS)

    Trammell, H.E.

    1979-04-01

    Experiences of a technical group that uses the services of computer centers are recounted. An orientation on the ORNL Engineering Technology Division and its missions is given to provide background on the diversified efforts undertaken by the Division and its opportunities to benefit from computer technology. Specific ways in which computers are used within the Division are described; these include facility control, data acquisition, data analysis, theory applications, code development, information processing, cost control, management of purchase requisitions, maintenance of personnel information, and control of technical publications. Problem areas found to need improvement are the overloading of computers during normal working hours, lack of code transportability, delay in obtaining routine programming, delay in key punching services, bewilderment in the use of large computer centers, complexity of job control language, and uncertain quality of software. 20 figures

  19. Computer-assisted optimization of chest fluoroscopy

    International Nuclear Information System (INIS)

    Korolyuk, I.P.; Filippova, N.V.; Kirillov, L.P.; Momsenko, S.F.

    1987-01-01

    The main trends in the use of computers for the optimization of chest fluorography among employees and workers of a large industrial enterprise are considered. The following directions were determined: automated sorting of fluorograms, formalization of X-ray signs in describing fluorograms, and organization of a special system of fluorographic data management. Four levels of algorithms to solve the problems of fluorography were considered: 1) shops, personnel department, etc.; 2) an automated center for mass screening and a medical unit; 3) a computer center and 4) planning and management service. The results of computer use over a 3-year period were analyzed. The efficacy of computer use was shown

  20. Computational sustainability

    CERN Document Server

    Kersting, Kristian; Morik, Katharina

    2016-01-01

    The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

  1. Design and analysis of a tendon-based computed tomography-compatible robot with remote center of motion for lung biopsy.

    Science.gov (United States)

    Yang, Yunpeng; Jiang, Shan; Yang, Zhiyong; Yuan, Wei; Dou, Huaisu; Wang, Wei; Zhang, Daguang; Bian, Yuan

    2017-04-01

    Nowadays, biopsy is a decisive method of lung cancer diagnosis, whereas lung biopsy is time-consuming, complex and inaccurate. A computed tomography-compatible robot for rapid and precise lung biopsy is therefore developed in this article. According to the actual operation process, the robot is divided into two modules: a 4-degree-of-freedom position module for locating the puncture point, which accommodates almost all patient positions, and a 3-degree-of-freedom tendon-based orientation module with remote center of motion, which is compact and computed tomography-compatible and orients and inserts the needle automatically inside the computed tomography bore. The workspace of the robot surrounds the patient's thorax, and the needle tip forms a cone under the patient's skin. A new error model of the robot based on screw theory is proposed in view of structure error and actuation error, which are regarded as screw motions. Simulation is carried out to verify the precision of the error model, contrasted with compensation via inverse kinematics. The results of an insertion experiment on a specific phantom prove the feasibility of the robot, with a mean error of 1.373 mm in a laboratory environment, which is accurate enough to replace manual operation.

  2. McMaster University: College and University Computing Environment.

    Science.gov (United States)

    CAUSE/EFFECT, 1988

    1988-01-01

    The computing and information services (CIS) organization includes administrative computing, academic computing, and networking and has three divisions: computing services, development services, and information services. Other computing activities include Health Sciences, Humanities Computing Center, and Department of Computer Science and Systems.…

  3. Root Canal Transportation and Centering Ability of Nickel-Titanium Rotary Instruments in Mandibular Premolars Assessed Using Cone-Beam Computed Tomography.

    Science.gov (United States)

    Mamede-Neto, Iussif; Borges, Alvaro Henrique; Guedes, Orlando Aguirre; de Oliveira, Durvalino; Pedro, Fábio Luis Miranda; Estrela, Carlos

    2017-01-01

    The aim of this study was to evaluate, using cone-beam computed tomography (CBCT), transportation and centralization of different nickel-titanium (NiTi) rotary instruments. One hundred and twenty eight mandibular premolars were selected and instrumented using the following brands of NiTi files: WaveOne, WaveOne Gold, Reciproc, ProTaper Next, ProTaper Gold, Mtwo, BioRaCe and RaCe. CBCT imaging was performed before and after root canal preparation to obtain measurements of mesial and distal dentin walls and calculations of root canal transportation and centralization. A normal distribution of data was confirmed by the Kolmogorov-Smirnov and Levene tests, and results were assessed using the Kruskal-Wallis test. Statistical significance was set at 5%. ProTaper Gold produced the lowest canal transportation values, and RaCe, the highest. ProTaper Gold files also showed the highest values for centering ability, whereas BioRaCe showed the lowest. No significant differences were found across the different instruments in terms of canal transportation and centering ability (P > 0.05). Based on the methodology employed, all instruments used for root canal preparation of mandibular premolars performed similarly with regard to canal transportation and centering ability.

  4. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  6. Computer applications in controlled fusion research

    International Nuclear Information System (INIS)

    Killeen, J.

    1975-01-01

    The application of computers to controlled thermonuclear research (CTR) is essential. In the near future the use of computers in the numerical modeling of fusion systems should increase substantially. A recent panel has identified five categories of computational models to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies is called for. The development and application of computer codes to implement these models is a vital step in reaching the goal of fusion power. To meet the needs of the fusion program the National CTR Computer Center has been established at the Lawrence Livermore Laboratory. A large central computing facility is linked to smaller computing centers at each of the major CTR Laboratories by a communication network. The crucial element needed for success is trained personnel. The number of people with knowledge of plasma science and engineering trained in numerical methods and computer science must be increased substantially in the next few years. Nuclear engineering departments should encourage students to enter this field and provide the necessary courses and research programs in fusion computing

  7. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling have advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  8. Computer-Based Training in Eating and Nutrition Facilitates Person-Centered Hospital Care: A Group Concept Mapping Study.

    Science.gov (United States)

    Westergren, Albert; Edfors, Ellinor; Norberg, Erika; Stubbendorff, Anna; Hedin, Gita; Wetterstrand, Martin; Rosas, Scott R; Hagell, Peter

    2018-04-01

    Studies have shown that computer-based training in eating and nutrition for hospital nursing staff increased the likelihood that patients at risk of undernutrition would receive nutritional interventions. This article seeks to provide understanding from the perspective of nursing staff of conceptually important areas for computer-based nutritional training, and their relative importance to nutritional care, following completion of the training. Group concept mapping, an integrated qualitative and quantitative methodology, was used to conceptualize important factors relating to the training experiences through four focus groups (n = 43), statement sorting (n = 38), and importance rating (n = 32), followed by multidimensional scaling and cluster analysis. Sorting of 38 statements yielded four clusters. These clusters (number of statements) were as follows: personal competence and development (10), practice close care development (10), patient safety (9), and awareness about the nutrition care process (9). First and second clusters represented "the learning organization," and third and fourth represented "quality improvement." These findings provide a conceptual basis for understanding the importance of training in eating and nutrition, which contributes to a learning organization and quality improvement, and can be linked to and facilitates person-centered nutritional care and patient safety.

  9. Applied technology center business plan and market survey

    Science.gov (United States)

    Hodgin, Robert F.; Marchesini, Roberto

    1990-01-01

    The business plan and market survey for the Applied Technology Center (ATC), a computer technology transfer and development non-profit corporation, are presented. The mission of the ATC is to stimulate innovation in state-of-the-art and leading edge computer based technology. The ATC encourages the practical utilization of late-breaking computer technologies by firms of all varieties.

  10. 76 FR 14669 - Privacy Act of 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007

    Science.gov (United States)

    2011-03-17

    ... 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice of computer... notice establishes a computer matching agreement between CMS and the Department of Defense (DoD). We have...

  11. The combinatorics computation for Casimir operators of the symplectic Lie algebra and the application for determining the center of the enveloping algebra of a semidirect product

    International Nuclear Information System (INIS)

    Le Van Hop.

    1989-12-01

    The combinatorics computation is used to describe the Casimir operators of the symplectic Lie algebra. This result is applied to determine the center of the enveloping algebra of the semidirect product of the Heisenberg Lie algebra and the symplectic Lie algebra. (author). 10 refs

  12. The use of personal computers in reactor physics

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1988-01-01

    This paper points out that personal computers are now powerful enough (in terms of core size and speed) to allow them to be used for serious reactor physics applications. In addition the low cost of personal computers means that even small institutes can now have access to a significant amount of computer power. At the present time distribution centers, such as RSIC, are beginning to distribute reactor physics codes for use on personal computers; hopefully in the near future more and more of these codes will become available through distribution centers, such as RSIC

  13. Computational atomic and nuclear physics

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.; McGrory, J.B.

    1990-01-01

    The evolution of parallel processor supercomputers in recent years provides opportunities to investigate in detail many complex problems, in many branches of physics, which were considered to be intractable only a few years ago. But to take advantage of these new machines, one must have a better understanding of how the computers organize their work than was necessary with previous single processor machines. Equally important, the scientist must have this understanding as well as a good understanding of the structure of the physics problem under study. In brief, a new field of computational physics is evolving, which will be led by investigators who are highly literate both computationally and physically. A Center for Computationally Intensive Problems has been established with the collaboration of the University of Tennessee Science Alliance, Vanderbilt University, and the Oak Ridge National Laboratory. The objective of this Center is to carry out forefront research in computationally intensive areas of atomic, nuclear, particle, and condensed matter physics. An important part of this effort is the appropriate training of students. An early effort of this Center was to conduct a Summer School of Computational Atomic and Nuclear Physics. A distinguished faculty of scientists in atomic, nuclear, and particle physics gave lectures on the status of present understanding of a number of topics at the leading edge in these fields, and emphasized those areas where computational physics was in a position to make a major contribution. In addition, there were lectures on numerical techniques which are particularly appropriate for implementation on parallel processor computers and which are of wide applicability in many branches of science

  14. DATA CENTER REMODELING FOR THE INTERNET OF THINGS

    Directory of Open Access Journals (Sweden)

    Cristian IVĂNUŞ

    2015-05-01

    Designing an efficient data center is more than ever a challenge for many companies when it comes to meeting the requirements of greater and extensible computing capacity. Increasing volumes of data must be stored, applications have become increasingly complex, and the requirements for running business operations must be very flexible. If we want the ICT (Information and Communication Technology) infrastructure to be able to offer a continuously high level of service, it is essential to rethink the "robustness" of data centers and the need for innovation in them. This is true for servers and storage as well as for processing power. Companies operating in this field help customers optimize their data center availability and security by evaluating the energy consumption, cooling capacity and other factors involved in data center upgrades. The objective is an optimum in terms of occupied space, power consumption and cooling in order to achieve sustainable long-term data center operation. The explosive development and modernization of data centers is a result of the three major trends in IT (Information Technology) today: cloud computing, the Internet of Things and Big Data [1, 9].

  15. Computer Technology for Industry

    Science.gov (United States)

    1979-01-01

    In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effect by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC®), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.

  16. USERDA computer program summaries. Numbers 177--239

    International Nuclear Information System (INIS)

    1975-10-01

    Since 1960 the Argonne Code Center has served as a U. S. Atomic Energy Commission information center for computer programs developed and used primarily for the solution of problems in nuclear physics, reactor design, reactor engineering and operation. The Center, through a network of registered installations, collects, validates, maintains, and distributes a library of these computer programs and publishes a compilation of abstracts describing them. In 1972 the scope of the Center's activities was officially expanded to include computer programs developed in all of the U. S. Atomic Energy Commission program areas and the compilation and publication of this report. The Computer Program Summary report contains summaries of computer programs at the specification stage, under development, being checked out, in use, or available at ERDA offices, laboratories, and contractor installations. Programs are divided into the following categories: cross section and resonance integral calculations; spectrum calculations, generation of group constants, lattice and cell problems; static design studies; depletion, fuel management, cost analysis, and reactor economics; space-independent kinetics; space--time kinetics, coupled neutronics--hydrodynamics--thermodynamics and excursion simulations; radiological safety, hazard and accident analysis; heat transfer and fluid flow; deformation and stress distribution computations, structural analysis and engineering design studies; gamma heating and shield design programs; reactor systems analysis; data preparation; data management; subsidiary calculations; experimental data processing; general mathematical and computing system routines; materials; environmental and earth sciences; space sciences; electronics and engineering equipment; chemistry; particle accelerators and high-voltage machines; physics; controlled thermonuclear research; biology and medicine; and data

  17. Computer Operating System Maintenance.

    Science.gov (United States)

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on ... computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  18. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. Processing all of the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets (bytes) per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network based on the dispatch and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computing center at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier-1 centers. A Tier 1 is typically a national center; it is responsible for keeping a copy of the raw data, for processing it in order to extract physically meaningful quantities, and for transferring the results to the 150 Tier-2 centers. A Tier 2 operates at the level of an institute or laboratory and is in charge of the final analysis of the data and of the production of simulations. Tier-3 sites, at the level of individual laboratories, provide a complementary, local resource to the Tier 2s for data analysis. (A.C.)
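
    As a rough consistency check of the volumes quoted in this abstract, a sustained flow of a few gigabytes per second over a typical year of data taking does land in the range of a few tens of petabytes. The sketch below is a minimal back-of-envelope calculation in Python; the live time of 10^7 seconds per year is an assumed, commonly quoted order of magnitude, not a figure from the abstract.

```python
# Back-of-envelope check of the W-LCG data volumes quoted above (illustrative only).
raw_rate_bytes_per_s = 3e9        # "a few 10^9 octets per second" (assumed value: 3 GB/s)
live_seconds_per_year = 1e7       # assumed order of magnitude for yearly data-taking time

stored_per_year = raw_rate_bytes_per_s * live_seconds_per_year
print(f"~{stored_per_year:.1e} bytes/year "
      f"(~{stored_per_year / 1e15:.0f} x 10^15 octets, i.e. tens of petabytes)")
```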

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. [Figure 3: Number of events per month (data)] In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  20. Center conditions and limit cycles for BiLiénard systems

    Directory of Open Access Journals (Sweden)

    Jaume Giné

    2017-03-01

    Full Text Available In this article we study the center problem for polynomial BiLiénard systems of degree n. Computing the focal values and using Gröbner bases we find the center conditions for such systems for n=6. We also establish a conjecture about the center conditions for polynomial BiLiénard systems of arbitrary degree.

  1. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid.

  2. Annual report of R and D activities in Center for Computational Science and e-Systems from April 1, 2007 to March 31, 2009

    International Nuclear Information System (INIS)

    2010-01-01

    This report provides an overview of research and development activities in Center for Computational Science and e-Systems (CCSE), JAEA, during the fiscal years 2007 and 2008 (Apr 1, 2007 - March 31, 2009). These research and development activities have been performed by the Simulation Technology R and D Office and Computer Science R and D Office. These activities include development of secure computational infrastructure for atomic energy research based on the grid technology, large scale seismic analysis of an entire nuclear reactor structure, large scale fluid dynamics simulation of J-PARC mercury target, large scale plasma simulation for nuclear fusion reactor, large scale atomic and subatomic simulations of nuclear fuels and materials for safety assessment, large scale quantum simulations of superconductor for the design of new devices and fundamental understanding of superconductivity, development of protein database for the identification of radiation-resistance gene, and large scale atomic simulation of proteins. (author)

  3. Analytic reducibility of nondegenerate centers: Cherkas systems

    Directory of Open Access Journals (Sweden)

    Jaume Giné

    2016-07-01

    where $P_i(x)$ are polynomials of degree $n$, $P_0(0)=0$ and $P_0'(0)<0$. Computing the focal values we find the center conditions for such systems for degree $3$, and using modular arithmetic for degree $4$. Finally we state a conjecture about the center conditions for Cherkas polynomial differential systems of degree $n$.

  4. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    Full Text Available The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  5. Development of an Instrument to Measure Health Center (HC) Personnel's Computer Use, Knowledge and Functionality Demand for HC Computerized Information System in Thailand

    OpenAIRE

    Kijsanayotin, Boonchai; Pannarunothai, Supasit; Speedie, Stuart

    2005-01-01

    Knowledge about socio-technical aspects of information technology (IT) is vital for the success of health IT projects. The Thailand health administration anticipates using health IT to support the recently implemented national universal health care system. However, the national knowledge associated with the socio-technical aspects of health IT has not been studied in Thailand. A survey instrument measuring Thai health center (HC) personnel’s computer use, basic IT knowledge a...

  6. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  7. Center for Coastline Security Technology, Year-2

    National Research Council Canada - National Science Library

    Glegg, Stewart; Glenn, William; Furht, Borko; Beaujean, P. P; Frisk, G; Schock, S; VonEllenrieder, K; Ananthakrishnan, P; An, E; Granata, R

    2007-01-01

    ...), the Imaging Technology Center, the Department of Computer Science and Engineering, and the University Consortium for Intermodal Transportation Safety and Security at Florida Atlantic University...

  8. How to Bill Your Computer Services.

    Science.gov (United States)

    Dooskin, Herbert P.

    1981-01-01

    A computer facility billing procedure should be designed so that the full costs of a computer center operation are equitably charged to the users. Design criteria, costing methods, and management's role are discussed. (Author/MLF)

  9. Outline of Toshiba Business Information Center

    Science.gov (United States)

    Nagata, Yoshihiro

    Toshiba Business Information Center gathers and stores in-house and external business information used in common within the Toshiba Corp., and provides companywide circulation, reference and other services. The Center established a centralized information management system by employing decentralized computers, electronic filing apparatus (30 cm laser disc) and other office automation equipment. Online retrieval through a LAN is available for searching the stored documents, and the increasing number of copying requests is handled by the electronic filing system. This paper describes the purpose of establishing the Center, its facilities and management scheme, the systematization of the files, and the present situation and plans for each information service.

  10. Colorado Learning Disabilities Research Center.

    Science.gov (United States)

    DeFries, J. C.; And Others

    1997-01-01

    Results obtained from the center's six research projects are reviewed, including research on psychometric assessment of twins with reading disabilities, reading and language processes, attention deficit-hyperactivity disorder and executive functions, linkage analysis and physical mapping, computer-based remediation of reading disabilities, and…

  11. Annual report of R and D activities in Center for Computational Science and e-Systems from April 1, 2009 to March 31, 2010

    International Nuclear Information System (INIS)

    2011-10-01

    This report overviews the activity of research and development (R and D) in Center for Computational Science and e-Systems (CCSE) of the Japan Atomic Energy Agency (JAEA), during the fiscal year 2009 (April 1, 2009 - March 31, 2010). The work has been accomplished by the Simulation Technology R and D Office and Computer Science R and D Office in CCSE. The activity includes researches of secure computational infrastructure for the use in atomic energy research, which is based on the grid technology, a seismic response analysis for the structure of nuclear power plants, materials science, and quantum bioinformatics. The materials science research includes large scale atomic and subatomic simulations of nuclear fuels and materials for safety assessment, large scale quantum simulations of superconductor for the design of new devices and fundamental understanding of superconductivity. The quantum bioinformatics research focuses on the development of technology for large scale atomic simulations of proteins. (author)

  12. 78 FR 39730 - Privacy Act of 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302

    Science.gov (United States)

    2013-07-02

    ... 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS intends to conduct with State-based Administering...

  13. NASA Langley Research Center outreach in astronautical education

    Science.gov (United States)

    Duberg, J. E.

    1976-01-01

    The Langley Research Center has traditionally maintained an active relationship with the academic community, especially at the graduate level, to promote the Center's research program and to make graduate education available to its staff. Two new institutes at the Center - the Joint Institute for Acoustics and Flight Sciences, and the Institute for Computer Applications - are discussed. Both provide for research activity at the Center by university faculties. The American Society of Engineering Education Summer Faculty Fellowship Program and the NASA-NRC Postdoctoral Resident Research Associateship Program are also discussed.

  14. USERDA computer software summaries: numbers 240 through 324

    International Nuclear Information System (INIS)

    1976-12-01

    Since 1960 the Argonne Code Center has served as a U.S. Atomic Energy Commission information center for computer programs developed and used primarily for the solution of problems in nuclear physics, reactor design, reactor engineering and operation. The Center, through a network of registered installations, collects, validates, maintains, and distributes a library of these computer programs and publishes a compilation of abstracts describing them. In 1972 the scope of the Center's activities was officially expanded to include computer programs developed in all of the U.S. Atomic Energy Commission program areas and the compilation and publication of this report. The Computer Software Summary report contains summaries of computer programs at the specification stage, under development, being checked out, in use, or available at ERDA offices, laboratories, and contractor installations. Programs are divided into the following categories: cross section and resonance integral calculations; spectrum calculations, generation of group constants, lattice and cell problems; static design studies; depletion, fuel management, cost analysis, and reactor economics; space-independent kinetics; space--time kinetics, coupled neutronics--hydrodynamics--thermodynamics and excursion simulations; radiological safety, hazard and accident analysis; heat transfer and fluid flow; deformation and stress distribution computations, structural analysis and engineering design studies; gamma heating and shield design programs; reactor systems analysis; data preparation; data management; subsidiary calculations; experimental data processing; general mathematical and computing system routines; materials; environmental and earth sciences; space sciences; electronics and engineering equipment; chemistry; particle accelerators and high-voltage machines; physics; controlled thermonuclear research; biology and medicine; and data

  15. Computer Training for Seniors: An Academic-Community Partnership

    Science.gov (United States)

    Sanders, Martha J.; O'Sullivan, Beth; DeBurra, Katherine; Fedner, Alesha

    2013-01-01

    Computer technology is integral to information retrieval, social communication, and social interaction. However, only 47% of seniors aged 65 and older use computers. The purpose of this study was to determine the impact of a client-centered computer program on computer skills, attitudes toward computer use, and generativity in novice senior…

  16. Air flow management in raised floor data centers

    CERN Document Server

    Arghode, Vaibhav K

    2016-01-01

    The Brief discusses primarily two aspects of air flow management in raised floor data centers. Firstly, cooling air delivery through perforated tiles is examined, and the influence of tile geometry on flow field development and hot air entrainment above perforated tiles is discussed. Secondly, the use of cold aisle containment to physically separate hot and cold regions and minimize hot and cold air mixing is presented. Both experimental investigations and computational efforts are discussed, and the development of computational fluid dynamics (CFD) based models for simulating air flow in data centers is included. In addition, metrology tools for facility-scale air velocity and temperature measurement, and for air flow rate measurement through perforated floor tiles and server racks, are examined, and the authors present thermodynamics-based models to gauge the effectiveness and importance of air flow management schemes in data centers.

  17. Ways to increase the effectiveness of using computers and machine programs

    Energy Technology Data Exchange (ETDEWEB)

    Bulgakov, R T; Bagautdinov, G M; Kovalenko, Yu M

    1979-01-01

    An analysis is conducted of the statistical data on the operation of the computers at the computer center of the Tatar Scientific Research and Design Institute for Oil. The reasons affecting the effectiveness of the use of the computers and the machine programs are identified through an expert questionnaire, and an ''effectiveness tree'' is compiled. Organizational measures required for the successful use of the computers are formulated for the executor (the computer center), the user, management and the senior leadership.

  18. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  19. The New MCNP6 Depletion Capability

    International Nuclear Information System (INIS)

    Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.

    2012-01-01

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

  20. 78 FR 50419 - Privacy Act of 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310

    Science.gov (United States)

    2013-08-19

    ... 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS plans to conduct with the Department of Homeland...

  1. Computer applications in controlled fusion research

    International Nuclear Information System (INIS)

    Killeen, J.

    1975-02-01

    The role of Nuclear Engineering Education in the application of computers to controlled fusion research can be a very important one. In the near future the use of computers in the numerical modelling of fusion systems should increase substantially. A recent study group has identified five categories of computational models to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are called for. The development and application of computer codes to implement these models is a vital step in reaching the goal of fusion power. In order to meet the needs of the fusion program the National CTR Computer Center has been established at the Lawrence Livermore Laboratory. A large central computing facility is linked to smaller computing centers at each of the major CTR laboratories by a communications network. The crucial element that is needed for success is trained personnel. The number of people with knowledge of plasma science and engineering that are trained in numerical methods and computer science is quite small, and must be increased substantially in the next few years. Nuclear Engineering departments should encourage students to enter this field and provide the necessary courses and research programs in fusion computing. (U.S.)

  2. Diamond NV centers for quantum computing and quantum networks

    NARCIS (Netherlands)

    Childress, L.; Hanson, R.

    2013-01-01

    The exotic features of quantum mechanics have the potential to revolutionize information technologies. Using superposition and entanglement, a quantum processor could efficiently tackle problems inaccessible to current-day computers. Nonlocal correlations may be exploited for intrinsically secure

  3. Management Needs for Computer Support.

    Science.gov (United States)

    Irby, Alice J.

    University management has many and varied needs for effective computer services in support of their processing and information functions. The challenge for the computer center managers is to better understand these needs and assist in the development of effective and timely solutions. Management needs can range from accounting and payroll to…

  4. Security and Privacy in Fog Computing: Challenges

    OpenAIRE

    Mukherjee, Mithun; Matam, Rakesh; Shu, Lei; Maglaras, Leandros; Ferrag, Mohamed Amine; Choudhry, Nikumani; Kumar, Vikas

    2017-01-01

    The fog computing paradigm extends the storage, networking, and computing facilities of cloud computing toward the edge of the network, offloading the cloud data centers and reducing service latency for end users. However, the characteristics of fog computing raise new security and privacy challenges. The existing security and privacy measures for cloud computing cannot be directly applied to fog computing due to its features, such as mobility, heteroge...

  5. Planning for the Automation of School Library Media Centers.

    Science.gov (United States)

    Caffarella, Edward P.

    1996-01-01

    Geared for school library media specialists whose centers are in the early stages of automation or conversion to a new system, this article focuses on major components of media center automation: circulation control; online public access catalogs; machine readable cataloging; retrospective conversion of print catalog cards; and computer networks…

  6. Computer-Aided Corrosion Program Management

    Science.gov (United States)

    MacDowell, Louis

    2010-01-01

    This viewgraph presentation reviews Computer-Aided Corrosion Program Management at John F. Kennedy Space Center. The contents include: 1) Corrosion at the Kennedy Space Center (KSC); 2) Requirements and Objectives; 3) Program Description, Background and History; 4) Approach and Implementation; 5) Challenges; 6) Lessons Learned; 7) Successes and Benefits; and 8) Summary and Conclusions.

  7. [Computer-aided prescribing: from utopia to reality].

    Science.gov (United States)

    Suárez-Varela Ubeda, J; Beltrán Calvo, C; Molina López, T; Navarro Marín, P

    2005-05-31

    To determine whether the introduction of computer-aided prescribing helped reduce the administrative burden at primary care centers. Descriptive, cross-sectional design. Torreblanca Health Center in the province of Seville, southern Spain. From 29 October 2003 to the present a pilot project involving nine pharmacies in the basic health zone served by this health center has been running to evaluate computer-aided prescribing (the Receta XXI project) with real patients. All patients on the center's list of patients who came to the center for an administrative consultation to renew prescriptions for medications or supplies for long-term treatment. Total number of administrative visits per patient for patients who came to the center to renew prescriptions for long-term treatment, as recorded by the Diraya system (Historia Clinica Digital del Ciudadano, or Citizen's Digital Medical Record) during the period from February to July 2004. Total number of the same type of administrative visits recorded by the previous system (TASS) during the period from February to July 2003. The mean number of administrative visits per month during the period from February to July 2003 was 160, compared to a mean number of 64 visits during the period from February to July 2004. The reduction in the number of visits for prescription renewal was 60%. Introducing a system for computer-aided prescribing significantly reduced the number of administrative visits for prescription renewal for long-term treatment. This could help reduce the administrative burden considerably in primary care if the system were used in all centers.

  8. National Energy Research Scientific Computing Center 2007 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  9. Information center as a technical institute unifying a user community

    International Nuclear Information System (INIS)

    Maskewitz, B.F.; McGill, B.; Hatmaker, N.A.

    1976-01-01

    The historical background to the information analysis center concept is presented first. The Radiation Shielding Information Center (RSIC) at ORNL is cited as an example of the information analysis center. RSIC objectives and scope are described, and RSIC's role in unification of the field of shielding is discussed. Some problems in handling information exchange with respect to computer codes are examined

  10. Enhanced Survey and Proposal to secure the data in Cloud Computing Environment

    OpenAIRE

    MR.S.SUBBIAH; DR.S.SELVA MUTHUKUMARAN; DR.T.RAMKUMAR

    2013-01-01

    Cloud computing has the power to eliminate the cost of setting up high-end computing infrastructure. It is a promising design that offers a very flexible architecture, accessible through the internet. In the cloud computing environment the data may reside at any of the data centers. Because of that, a data center may leak the data stored there, beyond the reach and control of the users. For this kind of misbehaving data center, the service providers should take care of the security and...

  11. Astigmatic single photon emission computed tomography imaging with a displaced center of rotation

    International Nuclear Information System (INIS)

    Wang, H.; Smith, M.F.; Stone, C.D.; Jaszczak, R.J.

    1998-01-01

    A filtered backprojection algorithm is developed for single photon emission computed tomography (SPECT) imaging with an astigmatic collimator having a displaced center of rotation. The astigmatic collimator has two perpendicular focal lines, one that is parallel to the axis of rotation of the gamma camera and one that is perpendicular to this axis. Using SPECT simulations of projection data from a hot rod phantom and point source arrays, it is found that a lack of incorporation of the mechanical shift in the reconstruction algorithm causes errors and artifacts in reconstructed SPECT images. The collimator and acquisition parameters in the astigmatic reconstruction formula, which include focal lengths, radius of rotation, and mechanical shifts, are often partly unknown and can be determined using the projections of a point source at various projection angles. The accurate determination of these parameters by a least squares fitting technique using projection data from numerically simulated SPECT acquisitions is studied. These studies show that the accuracy of parameter determination is improved as the distance between the point source and the axis of rotation of the gamma camera is increased. The focal length to the focal line perpendicular to the axis of rotation is determined more accurately than the focal length to the focal line parallel to this axis. copyright 1998 American Association of Physicists in Medicine
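
    The parameter-determination step described above (fitting focal lengths, radius of rotation and mechanical shifts to point-source projections by least squares) can be illustrated with a much simpler geometry. The Python sketch below fits a center-of-rotation shift and a point-source position in a plain parallel-beam model; it is only an analogue of the idea, under assumed geometry and noise values, not the astigmatic model used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified parallel-beam analogue of the least-squares geometry fit: a point
# source at (x0, y0) projects to u(theta) = x0*cos(theta) + y0*sin(theta) + s,
# where s is the mechanical shift (center-of-rotation offset). The astigmatic
# geometry in the paper adds focal lengths and the radius of rotation.
def model(params, theta):
    x0, y0, shift = params
    return x0 * np.cos(theta) + y0 * np.sin(theta) + shift

def residuals(params, theta, u_meas):
    return model(params, theta) - u_meas

theta = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
true_params = (35.0, -12.0, 3.5)               # assumed source position and shift (mm)
rng = np.random.default_rng(0)
u_meas = model(true_params, theta) + rng.normal(0.0, 0.2, theta.size)

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(theta, u_meas))
print("estimated (x0, y0, shift):", np.round(fit.x, 2))
```

    In this simplified picture, placing the point source farther from the axis of rotation enlarges the cosine/sine terms relative to the measurement noise, which is consistent with the abstract's observation that parameter determination improves with source distance from the axis.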

  12. The BaBar experiment's distributed computing model

    International Nuclear Information System (INIS)

    Boutigny, D.

    2001-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multitier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT format and later in Objectivity format. GRID tools will be used for remote job submission

  13. The BaBar Experiment's Distributed Computing Model

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT[1] format and later in Objectivity[2] format. GRID tools will be used for remote job submission

  14. Comparative Analysis of Canal Centering Ability of Different Single File Systems Using Cone Beam Computed Tomography- An In-Vitro Study.

    Science.gov (United States)

    Agarwal, Rolly S; Agarwal, Jatin; Jain, Pradeep; Chandra, Anil

    2015-05-01

    The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristic influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement of full-sequence rotary file systems. The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20(o) to 35(o) were divided into three groups of 20 samples each: ProTaper PT (group I) - full-sequence rotary control group, OneShape OS (group II)- single file continuous rotation, WaveOne WO - single file reciprocal motion (group III). Pre instrumentation and post instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3mm, 6mm and 9mm from the apex. Scanned images were then accessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey's honestly significant difference test. It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p >0.05) at both 3mm as well as 6mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16), as compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18) while the differences between OS and WO were not statistically significant. It was concluded that there was minor difference between the tested groups. Single file systems demonstrated average canal transportation and centering ability comparable to full sequence
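
    The two quantities compared in this study, canal transportation and centering ability, are typically computed from pre- and post-instrumentation cross-sections with the formulas popularized by Gambill et al.; the short Python sketch below applies those commonly used formulas. The variable names and numeric values are illustrative assumptions, not measurements from this paper.

```python
# Commonly used (Gambill-style) formulas for canal transportation and centering
# ratio at one cross-section; m1/d1 are pre-instrumentation distances from the
# mesial/distal canal wall to the root surface, m2/d2 the post-instrumentation
# distances. All numbers below are made up for illustration.
def canal_transportation(m1, m2, d1, d2):
    return abs((m1 - m2) - (d1 - d2))

def centering_ratio(m1, m2, d1, d2):
    a, b = m1 - m2, d1 - d2
    hi = max(abs(a), abs(b))
    return 1.0 if hi == 0 else min(abs(a), abs(b)) / hi   # 1.0 = perfectly centered

m1, m2, d1, d2 = 0.90, 0.72, 1.10, 1.05   # mm, hypothetical section at 3 mm from the apex
print(f"transportation = {canal_transportation(m1, m2, d1, d2):.2f} mm")
print(f"centering ratio = {centering_ratio(m1, m2, d1, d2):.2f}")
```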

  15. Comparative Analysis of Canal Centering Ability of Different Single File Systems Using Cone Beam Computed Tomography- An In-Vitro Study

    Science.gov (United States)

    Agarwal, Jatin; Jain, Pradeep; Chandra, Anil

    2015-01-01

    Background The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristic influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement of full-sequence rotary file systems. Aim The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Materials and Methods Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20o to 35o were divided into three groups of 20 samples each: ProTaper PT (group I) – full-sequence rotary control group, OneShape OS (group II)- single file continuous rotation, WaveOne WO – single file reciprocal motion (group III). Pre instrumentation and post instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3mm, 6mm and 9mm from the apex. Scanned images were then accessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey’s honestly significant difference test. Results It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p >0.05) at both 3mm as well as 6mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16), as compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18) while the differences between OS and WO were not statistically significant Conclusion It was concluded that there was minor difference between the tested groups. Single file systems demonstrated average canal

  16. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Rinaldi, L; Ciocca, C; K, M; Annovi, A; Antonelli, M; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Barberis, S; Carminati, L; Campana, S; Di, A; Capone, V; Carlino, G; Doria, A; Esposito, R; Merola, L; De, A; Luminari, L

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to evolution of the ATLAS Computing Model.

  17. Canal transportation and centering ability of protaper and self-adjusting file system in long oval canals: An ex-vivo cone-beam computed tomography analysis.

    Science.gov (United States)

    Shah, Dipali Yogesh; Wadekar, Swati Ishwara; Dadpe, Ashwini Manish; Jadhav, Ganesh Ranganath; Choudhary, Lalit Jayant; Kalra, Dheeraj Deepak

    2017-01-01

    The purpose of this study was to compare and evaluate the shaping ability of ProTaper (PT) and Self-Adjusting File (SAF) system using cone-beam computed tomography (CBCT) to assess their performance in oval-shaped root canals. Sixty-two mandibular premolars with single oval canals were divided into two experimental groups ( n = 31) according to the systems used: Group I - PT and Group II - SAF. Canals were evaluated before and after instrumentation using CBCT to assess centering ratio and canal transportation at three levels. Data were statistically analyzed using one-way analysis of variance, post hoc Tukey's test, and t -test. The SAF showed better centering ability and lesser canal transportation than the PT only in the buccolingual plane at 6 and 9 mm levels. The shaping ability of the PT was best in the apical third in both the planes. The SAF had statistically significant better centering and lesser canal transportation in the buccolingual as compared to the mesiodistal plane at the middle and coronal levels. The SAF produced significantly less transportation and remained centered than the PT at the middle and coronal levels in the buccolingual plane of oval canals. In the mesiodistal plane, the performance of both the systems was parallel.

  18. Comparison of canal transportation and centering ability of twisted files, Pathfile-ProTaper system, and stainless steel hand K-files by using computed tomography.

    Science.gov (United States)

    Gergi, Richard; Rjeily, Joe Abou; Sader, Joseph; Naaman, Alfred

    2010-05-01

    The purpose of this study was to compare canal transportation and centering ability of 2 rotary nickel-titanium (NiTi) systems (Twisted Files [TF] and Pathfile-ProTaper [PP]) with conventional stainless steel K-files. Ninety root canals with severe curvature and short radius were selected. Canals were divided randomly into 3 groups of 30 each. After preparation with TF, PP, and stainless steel files, the amount of transportation that occurred was assessed by using computed tomography. Three sections from apical, mid-root, and coronal levels of the canal were recorded. Amount of transportation and centering ability were assessed. The 3 groups were statistically compared with analysis of variance and Tukey honestly significant difference test. Less transportation and better centering ability occurred with TF rotary instruments (P < .0001). K-files showed the highest transportation followed by PP system. PP system showed significant transportation when compared with TF (P < .0001). The TF system was found to be the best for all variables measured in this study. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  19. Exploring the Relationships between Self-Efficacy and Preference for Teacher Authority among Computer Science Majors

    Science.gov (United States)

    Lin, Che-Li; Liang, Jyh-Chong; Su, Yi-Ching; Tsai, Chin-Chung

    2013-01-01

    Teacher-centered instruction has been widely adopted in college computer science classrooms and has some benefits in training computer science undergraduates. Meanwhile, student-centered contexts have been advocated to promote computer science education. How computer science learners respond to or prefer the two types of teacher authority,…

  20. Performance of Cloud Computing Centers with Multiple Priority Classes

    NARCIS (Netherlands)

    Ellens, W.; Zivkovic, Miroslav; Akkerboom, J.; Litjens, R.; van den Berg, Hans Leo

    In this paper we consider the general problem of resource provisioning within cloud computing. We analyze the problem of how to allocate resources to different clients such that the service level agreements (SLAs) for all of these clients are met. A model with multiple service request classes
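
    The resource-provisioning question in this abstract is a queueing problem; while the paper's multi-priority model is not reproduced in the record, a single-class M/M/c (Erlang C) calculation gives the flavor of how one checks whether a given number of servers meets a delay-oriented SLA. The sketch below is only that illustrative baseline, with assumed arrival and service rates.

```python
import math

# Single-class M/M/c (Erlang C) baseline for sizing a pool of identical servers.
# The paper's model with multiple priority classes is not reproduced here.
def erlang_c(servers, offered_load):
    a = offered_load                                     # arrival rate / service rate
    s = sum(a**k / math.factorial(k) for k in range(servers))
    top = (a**servers / math.factorial(servers)) * (servers / (servers - a))
    return top / (s + top)                               # probability a request must wait

lam, mu, c = 90.0, 1.0, 100                              # assumed request and service rates
p_wait = erlang_c(c, lam / mu)
mean_wait = p_wait / (c * mu - lam)                      # mean queueing delay (seconds)
print(f"P(wait) = {p_wait:.3f}, mean waiting time = {mean_wait * 1000:.1f} ms")
```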

  1. Whatever works: a systematic user-centered training protocol to optimize brain-computer interfacing individually.

    Directory of Open Access Journals (Sweden)

    Elisabeth V C Friedrich

    Full Text Available This study implemented a systematic user-centered training protocol for a 4-class brain-computer interface (BCI. The goal was to optimize the BCI individually in order to achieve high performance within few sessions for all users. Eight able-bodied volunteers, who were initially naïve to the use of a BCI, participated in 10 sessions over a period of about 5 weeks. In an initial screening session, users were asked to perform the following seven mental tasks while multi-channel EEG was recorded: mental rotation, word association, auditory imagery, mental subtraction, spatial navigation, motor imagery of the left hand and motor imagery of both feet. Out of these seven mental tasks, the best 4-class combination as well as most reactive frequency band (between 8-30 Hz was selected individually for online control. Classification was based on common spatial patterns and Fisher's linear discriminant analysis. The number and time of classifier updates varied individually. Selection speed was increased by reducing trial length. To minimize differences in brain activity between sessions with and without feedback, sham feedback was provided in the screening and calibration runs in which usually no real-time feedback is shown. Selected task combinations and frequency ranges differed between users. The tasks that were included in the 4-class combination most often were (1 motor imagery of the left hand (2, one brain-teaser task (word association or mental subtraction (3, mental rotation task and (4 one more dynamic imagery task (auditory imagery, spatial navigation, imagery of the feet. Participants achieved mean performances over sessions of 44-84% and peak performances in single-sessions of 58-93% in this user-centered 4-class BCI protocol. This protocol is highly adjustable to individual users and thus could increase the percentage of users who can gain and maintain BCI control. A high priority for future work is to examine this protocol with severely

  2. Cloud Computing Security

    OpenAIRE

    Ngongang, Guy

    2011-01-01

    This project aimed to show how possible it is to use a network intrusion detection system in the cloud. The security in the cloud is a concern nowadays and security professionals are still finding means to make cloud computing more secure. First of all the installation of the ESX4.0, vCenter Server and vCenter lab manager in server hardware was successful in building the platform. This allowed the creation and deployment of many virtual servers. Those servers have operating systems and a...

  3. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with petabyte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components of High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
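
    A first-order way to reason about the dimensioning problem described above is to compare the aggregate I/O demanded by a full farm of running jobs against the throughput the storage and internal network can deliver. The Python sketch below does exactly that back-of-envelope comparison; every number in it is an illustrative assumption, not a figure from the paper.

```python
# Back-of-envelope bottleneck check for an HTC farm (all figures are assumptions).
cores = 4000                      # job slots running concurrently
io_per_core_MBps = 2.5            # average read rate sustained by one running job
capacities_MBps = {
    "disk servers": 8000,         # aggregate disk-storage throughput
    "internal network": 12000,    # aggregate LAN throughput toward the worker nodes
}

demand = cores * io_per_core_MBps
print(f"aggregate demand: {demand:.0f} MB/s")
for name, capacity in capacities_MBps.items():
    utilisation = demand / capacity
    flag = "  <-- bottleneck" if utilisation > 1.0 else ""
    print(f"{name}: {utilisation:.0%} of capacity{flag}")
```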

  4. SciDAC Visualization and Analytics Center for Enabling Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Joy, Kenneth I. [Univ. of California, Davis, CA (United States)

    2014-09-14

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies.

  5. The 20 Tera flop Erasmus Computing Grid (ECG).

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  6. The 20 Tera flop Erasmus Computing Grid (ECG)

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2009-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  7. Computed tomography-guided percutaneous gastrostomy: initial experience at a cancer center

    International Nuclear Information System (INIS)

    Tyng, Chiang Jeng; Santos, Erich Frank Vater; Guerra, Luiz Felipe Alves; Bitencourt, Almir Galvao Vieira; Barbosa, Paula Nicole Vieira Pinto; Chojniak, Rubens; Universidade Federal do Espirito Santo

    2017-01-01

    Gastrostomy is indicated for patients with conditions that do not allow adequate oral nutrition. To reduce the morbidity and costs associated with the procedure, there is a trend toward the use of percutaneous gastrostomy, guided by endoscopy, fluoroscopy, or, most recently, computed tomography. The purpose of this paper was to review the computed tomography-guided gastrostomy procedure, as well as the indications for its use and the potential complications. (author)

  8. Computed tomography-guided percutaneous gastrostomy: initial experience at a cancer center

    Energy Technology Data Exchange (ETDEWEB)

    Tyng, Chiang Jeng; Santos, Erich Frank Vater; Guerra, Luiz Felipe Alves; Bitencourt, Almir Galvao Vieira; Barbosa, Paula Nicole Vieira Pinto; Chojniak, Rubens [A. C. Camargo Cancer Center, Sao Paulo, SP (Brazil); Universidade Federal do Espirito Santo (HUCAM/UFES), Vitoria, ES (Brazil). Hospital Universitario Cassiano Antonio de Morais. Radiologia e Diagnostico por Imagem

    2017-03-15

    Gastrostomy is indicated for patients with conditions that do not allow adequate oral nutrition. To reduce the morbidity and costs associated with the procedure, there is a trend toward the use of percutaneous gastrostomy, guided by endoscopy, fluoroscopy, or, most recently, computed tomography. The purpose of this paper was to review the computed tomography-guided gastrostomy procedure, as well as the indications for its use and the potential complications. (author)

  9. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  10. High speed switching for computer and communication networks

    NARCIS (Netherlands)

    Dorren, H.J.S.

    2014-01-01

    The role of data centers and computers are vital for the future of our data-centric society. Historically the performance of data-centers is increasing with a factor 100-1000 every ten years and as a result of this the capacity of the data-center communication network has to scale accordingly. This

  11. MCNP(trademark) Version 5

    International Nuclear Information System (INIS)

    Cox, Lawrence J.; Barrett, Richard F.; Booth, Thomas Edward; Briesmeister, Judith F.; Brown, Forrest B.; Bull, Jeffrey S.; Giesler, Gregg Carl; Goorley, John T.; Mosteller, Russell D.; Forster, R. Arthur; Post, Susan E.; Prael, Richard E.; Selcow, Elizabeth Carol; Sood, Avneet

    2002-01-01

    The Monte Carlo transport workhorse, MCNP, is undergoing a massive renovation at Los Alamos National Laboratory (LANL) in support of the Eolus Project of the Advanced Simulation and Computing (ASCI) Program. MCNP Version 5 (V5) (expected to be released to RSICC in Spring, 2002) will consist of a major restructuring from FORTRAN-77 (with extensions) to ANSI-standard FORTRAN-90 with support for all of the features available in the present release (MCNP-4C2/4C3). To most users, the look-and-feel of MCNP will not change much except for the improvements (improved graphics, easier installation, better online documentation). For example, even with the major format change, full support for incremental patching will still be provided. In addition to the language and style updates, MCNP V5 will have various new user features. These include improved photon physics, neutral particle radiography, enhancements and additions to variance reduction methods, new source options, and improved parallelism support (PVM, MPI, OpenMP).

  12. Computed tomography system

    International Nuclear Information System (INIS)

    Lambert, T.W.; Blake, J.E.

    1981-01-01

    This invention relates to computed tomography and is particularly concerned with determining the CT numbers of zones of interest in an image displayed on a cathode ray tube which zones lie in the so-called level or center of the gray scale window. (author)
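
    The "level or center of the gray scale window" mentioned in this abstract refers to the standard window/level mapping used to display CT numbers on a limited gray scale. A minimal sketch of that mapping is given below; the particular center and width values are assumptions chosen only for illustration.

```python
import numpy as np

# Standard window/level mapping from CT numbers (Hounsfield units) to gray values.
def window_display(ct_numbers, center=40.0, width=400.0):
    lo = center - width / 2.0
    gray = (ct_numbers - lo) / width          # linear ramp across the window
    return np.clip(gray, 0.0, 1.0)            # clamp values outside the window

print(window_display(np.array([-1000.0, 40.0, 240.0, 3000.0])))
```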

  13. 69: Computers in radiotherapy - The Philippine perspective

    International Nuclear Information System (INIS)

    Rodriguez, L.V.; Sy Ortin, T.T.

    1987-01-01

    Malignant neoplasm ranks third among the killer diseases in the Philippines today. For the past five years, around 26,000 cases per year have been reported. In 1986, 27% of the total number of cases reported received radiation therapy. Individual treatment plans were made for 17% of these patients. A survey was conducted among the twelve radiation treatment centers in the country. Six of these centers are hoping to have treatment planning computers in the future. Financial constraints inhibit the acquisition of computers for radiotherapy use. At present, the authors have designed simple programs for use at the Cancer Control Center. Further development of treatment planning software that would meet the present needs of the local condition is being explored. 2 refs.; 2 figs.; 2 tabs

  14. PRIMARY SCHOOL PRINCIPALS’ ATTITUDES TOWARDS COMPUTER TECHNOLOGY IN THE USE OF COMPUTER TECHNOLOGY IN SCHOOL ADMINISTRATION

    OpenAIRE

    GÜNBAYI, İlhan; CANTÜRK, Gökhan

    2011-01-01

    The aim of the study is to determine the usage of computer technology in school administration, primary school administrators’ attitudes towards computer technology, and administrators’ and teachers’ computer literacy levels. The study was modeled as a survey. The population of the study consists of primary school principals and assistant principals in public primary schools in the center of Antalya. The data were collected from 161 (51%) administrator questionnaires in 68 of 129 public primary s...

  15. CILT2000: Ubiquitous Computing--Spanning the Digital Divide.

    Science.gov (United States)

    Tinker, Robert; Vahey, Philip

    2002-01-01

    Discusses the role of ubiquitous and handheld computers in education. Summarizes the contributions of the Center for Innovative Learning Technologies (CILT) and describes the ubiquitous computing sessions at the CILT2000 Conference. (Author/YDS)

  16. An accelerated line-by-line option for MODTRAN combining on-the-fly generation of line center absorption within 0.1 cm-1 bins and pre-computed line tails

    Science.gov (United States)

    Berk, Alexander; Conforti, Patrick; Hawes, Fred

    2015-05-01

    A Line-By-Line (LBL) option is being developed for MODTRAN6. The motivation for this development is two-fold. Firstly, when MODTRAN is validated against an independent LBL model, it is difficult to isolate the source of discrepancies. One must verify consistency between pressure, temperature and density profiles, between column density calculations, between continuum and particulate data, between spectral convolution methods, and more. Introducing an LBL option directly within MODTRAN will ensure common elements for all calculations other than those used to compute molecular transmittances. The second motivation for the LBL upgrade is that it will enable users to compute high spectral resolution transmittances and radiances for the full range of current MODTRAN applications. In particular, introducing the LBL feature into MODTRAN will enable first-principle calculations of scattered radiances, an option that is often not readily available with LBL models. MODTRAN will compute LBL transmittances within one 0.1 cm-1 spectral bin at a time, marching through the full requested band pass. The LBL algorithm will use the highly accurate, pressure- and temperature-dependent MODTRAN Padé approximant fits of the contribution from line tails to define the absorption from all molecular transitions centered more than 0.05 cm-1 from each 0.1 cm-1 spectral bin. The beauty of this approach is that the on-the-fly computations for each 0.1 cm-1 bin will only require explicit LBL summing of transitions centered within a 0.2 cm-1 spectral region. That is, the contribution from the more distant lines will be pre-computed via the Padé approximants. The status of the LBL effort will be presented. This will include initial thermal and solar radiance calculations, validation calculations, and self-validations of the MODTRAN band model against its own LBL calculations.
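As a rough, self-contained illustration of the binned scheme sketched in this abstract, the Python fragment below sums simple Lorentzian lines centered within 0.05 cm-1 of a 0.1 cm-1 bin and adds a pre-computed per-bin tail term for all more distant lines. The Lorentzian shape, the tail table and every number are invented stand-ins; MODTRAN's pressure- and temperature-dependent Padé-approximant tail fits are not reproduced here.

```python
import numpy as np

# Hypothetical illustration of the binned LBL idea: within each 0.1 cm^-1
# bin, only lines centered within 0.05 cm^-1 of the bin (a 0.2 cm^-1 window)
# are summed explicitly; more distant lines come from a pre-computed tail.

BIN_WIDTH = 0.1  # cm^-1

def lorentzian(nu, nu0, strength, gamma):
    """Simple Lorentzian absorption coefficient (illustrative only)."""
    return strength * gamma / (np.pi * ((nu - nu0) ** 2 + gamma ** 2))

def bin_absorption(nu_grid, lines, tail_table, bin_index):
    """Absorption on a fine grid inside one 0.1 cm^-1 bin.

    lines      : iterable of (center, strength, half-width) tuples
    tail_table : pre-computed distant-line absorption for this bin
    """
    bin_lo = bin_index * BIN_WIDTH
    bin_hi = bin_lo + BIN_WIDTH
    k = np.array(tail_table, dtype=float).copy()   # tail contribution
    for nu0, s, gamma in lines:
        if bin_lo - 0.05 <= nu0 <= bin_hi + 0.05:  # explicit LBL sum only here
            k += lorentzian(nu_grid, nu0, s, gamma)
    return k

# Tiny usage example with made-up numbers
nu = np.linspace(1000.0, 1000.1, 11)                  # one bin
lines = [(1000.03, 1.0, 0.02), (1000.4, 5.0, 0.02)]   # second line is "distant"
tails = np.full_like(nu, 1e-3)                        # stand-in for tail fits
print(bin_absorption(nu, lines, tails, bin_index=10000))
```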

  17. Security in cloud computing

    OpenAIRE

    Moreno Martín, Oriol

    2016-01-01

    Security in Cloud Computing is becoming a challenge for next generation Data Centers. This project will focus on investigating new security strategies for Cloud Computing systems. Cloud Computing is a recent paradigm to deliver services over the Internet. Businesses grow drastically because of it. Researchers focus their work on it. The rapid access to flexible and low cost IT resources in an on-demand fashion allows the users to avoid planning ahead for provisioning, and enterprises to save money ...

  18. Computer Vision Syndrome and Associated Factors Among Medical ...

    African Journals Online (AJOL)

    among college students the effects of computer use on the eye and vision related problems. ... which included the basic demographic profile, hours of computer use per ... Male was reported by Costa et al. among call center workers in Brazil.[17] Headache ... the use of computer had become universal in higher education.

  19. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).

  20. CY15 Livermore Computing Focus Areas

    Energy Technology Data Exchange (ETDEWEB)

    Connell, Tom M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cupps, Kim C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); D' Hooge, Trent E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fahey, Tim J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fox, Dave M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Futral, Scott W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gary, Mark R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Goldstone, Robin J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hamilton, Pam G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Heer, Todd M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Long, Jeff W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mark, Rich J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Morrone, Chris J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shoopman, Jerry D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Slavec, Joe A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, David W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Springmeyer, Becky R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Stearman, Marc D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Watson, Py C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-20

    The LC team undertook a survey of primary Center drivers for CY15. Identified key drivers included enhancing user experience and productivity, pre-exascale platform preparation, process improvement, data-centric computing paradigms, and business expansion. The team organized critical supporting efforts into three cross-cutting focus areas: Improving Service Quality; Monitoring, Automation, Delegation and Center Efficiency; and Next Generation Compute and Data Environments. In each area the team detailed high-level challenges and identified discrete actions to address these issues during the calendar year. Identifying the Center's primary drivers, issues, and plans is intended to serve as a lens focusing LC personnel, resources, and priorities throughout the year.

  1. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer assisted virtual environment - or CAVE TM — that allows scientists and engineers to literally walk into their data...

  2. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  3. Computational Nanotechnology Molecular Electronics, Materials and Machines

    Science.gov (United States)

    Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This presentation covers research being performed on computational nanotechnology, carbon nanotubes and fullerenes at the NASA Ames Research Center. Topics cover include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.

  4. Computer-Aided Diagnosis of Breast Cancer: A Multi-Center Demonstrator

    National Research Council Canada - National Science Library

    Floyd, Carey

    2000-01-01

    ... The focus has been to gather data from multiple sites in order to verify whether the artificial neural network computer aid to the diagnosis of breast cancer can be translated between locations...

  5. Computer Science Research at Langley

    Science.gov (United States)

    Voigt, S. J. (Editor)

    1982-01-01

    A workshop was held at Langley Research Center, November 2-5, 1981, to highlight ongoing computer science research at Langley and to identify additional areas of research based upon the computer user requirements. A panel discussion was held in each of nine application areas, and these are summarized in the proceedings. Slides presented by the invited speakers are also included. A survey of scientific, business, data reduction, and microprocessor computer users helped identify areas of focus for the workshop. Several areas of computer science which are of most concern to the Langley computer users were identified during the workshop discussions. These include graphics, distributed processing, programmer support systems and tools, database management, and numerical methods.

  6. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  7. Opportunities for Combined Heat and Power in Data Centers

    Energy Technology Data Exchange (ETDEWEB)

    Darrow, Ken [ICF International; Hedman, Bruce [ICF International

    2009-03-01

    Data centers represent a rapidly growing and very energy intensive activity in commercial, educational, and government facilities. In the last five years the growth of this sector was the electric power equivalent to seven new coal-fired power plants. Data centers consume 1.5% of the total power in the U.S. Growth over the next five to ten years is expected to require a similar increase in power generation. This energy consumption is concentrated in buildings that are 10-40 times more energy intensive than a typical office building. The sheer size of the market, the concentrated energy consumption per facility, and the tendency of facilities to cluster in 'high-tech' centers all contribute to a potential power infrastructure crisis for the industry. Meeting the energy needs of data centers is a moving target. Computing power is advancing rapidly, which reduces the energy requirements for data centers. A lot of work is going into improving the computing power of servers and other processing equipment. However, this increase in computing power is increasing the power densities of this equipment. While fewer pieces of equipment may be needed to meet a given data processing load, the energy density of a facility designed to house this higher efficiency equipment will be as high as or higher than it is today. In other words, while the data center of the future may have the IT power of ten data centers of today, it is also going to have higher power requirements and higher power densities. This report analyzes the opportunities for CHP technologies to assist primary power in making the data center more cost-effective and energy efficient. Broader application of CHP will lower the demand for electricity from central stations and reduce the pressure on electric transmission and distribution infrastructure. This report is organized into the following sections: (1) Data Center Market Segmentation--the description of the overall size of the market, the size and

  8. Trip attraction rates of shopping centers in Northern New Castle County, Delaware.

    Science.gov (United States)

    2004-07-01

    This report presents the trip attraction rates of the shopping centers in Northern New Castle County in Delaware. The study aims to provide an alternative to ITE Trip Generation Manual (1997) for computing the trip attraction of shopping centers ...

  9. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  10. Exploitation of heterogeneous resources for ATLAS Computing

    CERN Document Server

    Chudoba, Jiri; The ATLAS collaboration

    2018-01-01

    LHC experiments require significant computational resources for Monte Carlo simulations and real data processing and the ATLAS experiment is not an exception. In 2017, ATLAS exploited steadily almost 3M HS06 units, which corresponds to about 300 000 standard CPU cores. The total disk and tape capacity managed by the Rucio data management system exceeded 350 PB. Resources are provided mostly by Grid computing centers distributed in geographically separated locations and connected by the Grid middleware. The ATLAS collaboration developed several systems to manage computational jobs, data files and network transfers. ATLAS solutions for job and data management (PanDA and Rucio) were generalized and now are used also by other collaborations. More components are needed to include new resources such as private and public clouds, volunteers' desktop computers and primarily supercomputers in major HPC centers. Workflows and data flows significantly differ for these less traditional resources and extensive software re...

  11. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  12. National Nuclear Data Center status report

    International Nuclear Information System (INIS)

    2002-01-01

    This paper is the status report of the US National Nuclear Data Center, Brookhaven. It describes the new NDS approach to customer services, which is based on users initiating wish lists on topics of interest, with the possibility to receive reports in hardcopy or electronic form. After completion within the next two years of the multi-platform software for management and data retrieval from shared databases, users will have the opportunity to directly install their own local nuclear data center for desktop applications. The paper describes the computer facilities, the nuclear reaction data structure, the database migration and the customer services. (a.n.)

  13. Scientific activities 1980 Nuclear Research Center ''Democritos''

    International Nuclear Information System (INIS)

    1982-01-01

    The scientific activities and achievements of the Nuclear Research Center Democritos for the year 1980 are presented in the form of a list of 76 projects giving the title, objectives, person responsible for each project, developed activities and the pertaining lists of publications. The 16 chapters of this work cover the activities of the main Divisions of the Democritos NRC: Electronics, Biology, Physics, Chemistry, Health Physics, Reactor, Scientific Directorate, Radioisotopes, Environmental Radioactivity, Soil Science, Computer Center, Uranium Exploration, Medical Service, Technological Applications, Radioimmunoassay and Training. (N.C.)

  14. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  15. SciDAC visualization and analytics center for enabling technology

    International Nuclear Information System (INIS)

    Bethel, E Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci, Valerio; Childs, Hank; Cohen, Jonathan; Duchaineau, Mark; Hamann, Bernd; Hansen, Charles; Laney, Dan; Lindstrom, Peter; Meredith, Jeremy; Ostrouchov, George; Parker, Steven; Silva, Claudio; Sanderson, Allen; Tricoche, Xavier

    2007-01-01

    The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an 'information big bang,' which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision of VACET is to adapt, extend, create when necessary, and deploy visual data analysis solutions that are responsive to the needs of DOE's computational and experimental scientists. Our center is engineered to be directly responsive to those needs and to deliver solutions for use in DOE's large open computing facilities. The research and development directly target data understanding problems provided by our scientific application stakeholders. VACET draws from a diverse set of visualization technology ranging from production quality applications and application frameworks to state-of-the-art algorithms for visualization, analysis, analytics, data manipulation, and data management

  16. Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  17. Center for Computer Security newsletter. Volume 2, Number 3

    Energy Technology Data Exchange (ETDEWEB)

    None

    1983-05-01

    The Fifth Computer Security Group Conference was held November 16 to 18, 1982, at the Knoxville Hilton in Knoxville, Tennessee. Attending were 183 people, representing the Department of Energy, DOE contractors, other government agencies, and vendor organizations. In these papers are abridgements of most of the papers presented in Knoxville. Less than half-a-dozen speakers failed to furnish either abstracts or full-text papers of their Knoxville presentations.

  18. Survey of Storage and Fault Tolerance Strategies Used in Cloud Computing

    Science.gov (United States)

    Ericson, Kathleen; Pallickara, Shrideep

    Cloud computing has gained significant traction in recent years. Companies such as Google, Amazon and Microsoft have been building massive data centers over the past few years. Spanning geographic and administrative domains, these data centers tend to be built out of commodity desktops with the total number of computers managed by these companies being in the order of millions. Additionally, the use of virtualization allows a physical node to be presented as a set of virtual nodes resulting in a seemingly inexhaustible set of computational resources. By leveraging economies of scale, these data centers can provision cpu, networking, and storage at substantially reduced prices which in turn underpins the move by many institutions to host their services in the cloud.

  19. Quantum computing with defects.

    Science.gov (United States)

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

    Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness--its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.

  20. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Science.gov (United States)

    2013-12-05

    ... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS... Privacy Act of 1974 (5 U.S.C. 552a), as amended, this notice announces the renewal of a CMP that CMS plans...

  1. High Performance Computing and Storage Requirements for Biological and Environmental Research Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2013-05-01

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In addition to large-scale computing and storage resources NERSC provides support and expertise that help scientists make efficient use of its systems. The latest review revealed several key requirements, in addition to achieving its goal of characterizing BER computing and storage needs.

  2. Analysis on the security of cloud computing

    Science.gov (United States)

    He, Zhonglin; He, Yuhua

    2011-02-01

    Cloud computing is a new technology, the fusion of computer technology and Internet development, and it will lead a revolution in IT and the information field. However, in cloud computing, data and application software are stored at large data centers, and the management of data and services is not completely trustworthy; the resulting security problems are the main obstacle to improving the quality of cloud services. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs a security architecture for cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the perspective of cloud computing users and service providers.

  3. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...

  4. 78 FR 30318 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2013-05-22

    ... Computational Mass-Spectrometry. Date: June 19-21, 2013. Time: 7:00 p.m. to 1:00 p.m. Agenda: To review and... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review... personal privacy. Name of Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict...

  5. Efficient workload management in geographically distributed data centers leveraging autoregressive models

    Science.gov (United States)

    Altomare, Albino; Cesario, Eugenio; Mastroianni, Carlo

    2016-10-01

    The opportunity of using Cloud resources on a pay-as-you-go basis and the availability of powerful data centers and high bandwidth connections are speeding up the success and popularity of Cloud systems, which is making on-demand computing a common practice for enterprises and scientific communities. The reasons for this success include natural business distribution, the need for high availability and disaster tolerance, the sheer size of their computational infrastructure, and/or the desire to provide uniform access times to the infrastructure from widely distributed client sites. Nevertheless, the expansion of large data centers is resulting in a huge rise of electrical power consumed by hardware facilities and cooling systems. The geographical distribution of data centers is becoming an opportunity: the variability of electricity prices, environmental conditions and client requests, both from site to site and with time, makes it possible to intelligently and dynamically (re)distribute the computational workload and achieve as diverse business goals as: the reduction of costs, energy consumption and carbon emissions, the satisfaction of performance constraints, the adherence to Service Level Agreement established with users, etc. This paper proposes an approach that helps to achieve the business goals established by the data center administrators. The workload distribution is driven by a fitness function, evaluated for each data center, which weighs some key parameters related to business objectives, among which, the price of electricity, the carbon emission rate, the balance of load among the data centers etc. For example, the energy costs can be reduced by using a "follow the moon" approach, e.g. by migrating the workload to data centers where the price of electricity is lower at that time. Our approach uses data about historical usage of the data centers and data about environmental conditions to predict, with the help of regressive models, the values of the
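The weighted-fitness idea described above can be illustrated with a short sketch; the site parameters, weights and normalization below are assumptions chosen for illustration, not the authors' actual formulation.

```python
# Illustrative sketch of a weighted fitness function for choosing where to
# place workload among geographically distributed data centers. All
# parameters and weights are invented, not the paper's model.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    electricity_price: float   # $/kWh at this hour
    carbon_rate: float         # kg CO2 per kWh
    utilization: float         # fraction of capacity already in use (0..1)

def fitness(dc: DataCenter, w_price=0.5, w_carbon=0.3, w_balance=0.2):
    """Lower is better: cheap, clean power and spare capacity win."""
    return (w_price * dc.electricity_price
            + w_carbon * dc.carbon_rate
            + w_balance * dc.utilization)

def place_job(data_centers):
    """'Follow the moon' style placement: pick the best-scoring site now."""
    return min(data_centers, key=fitness)

sites = [
    DataCenter("eu-night", electricity_price=0.08, carbon_rate=0.25, utilization=0.40),
    DataCenter("us-day",   electricity_price=0.14, carbon_rate=0.40, utilization=0.55),
]
print(place_job(sites).name)   # -> "eu-night"
```

Re-evaluating the same function as prices and environmental conditions change over time is what lets the workload migrate toward the momentarily cheapest or cleanest site.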

  6. Handbook for the Computer Security Certification of Trusted Systems

    National Research Council Canada - National Science Library

    Weissman, Clark

    1995-01-01

    Penetration testing is required for National Computer Security Center (NCSC) security evaluations of systems and products for the B2, B3, and A1 class ratings of the Trusted Computer System Evaluation Criteria (TCSEC...

  7. Energy efficient data centers

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman

    2004-03-30

    through extensive participation with data center professionals, examination of case study findings, and participation in data center industry meetings and workshops. Industry partners enthusiastically provided valuable insight into current practice, and helped to identify areas where additional public interest research could lead to significant efficiency improvement. This helped to define and prioritize the research agenda. The interaction involved industry representatives with expertise in all aspects of data center facilities, including specialized facility infrastructure systems and computing equipment. In addition to the input obtained through industry workshops, LBNL's participation in a three-day, comprehensive design "charrette" hosted by the Rocky Mountain Institute (RMI) yielded a number of innovative ideas for future research.

  8. Development of an instrument to measure health center (HC) personnel's computer use, knowledge and functionality demand for HC computerized information system in Thailand.

    Science.gov (United States)

    Kijsanayotin, Boonchai; Pannarunothai, Supasit; Speedie, Stuart

    2005-01-01

    Knowledge about socio-technical aspects of information technology (IT) is vital for the success of health IT projects. The Thailand health administration anticipates using health IT to support the recently implemented national universal health care system. However, national knowledge associated with the socio-technical aspects of health IT has not been studied in Thailand. A survey instrument measuring Thai health center (HC) personnel's computer use, basic IT knowledge and HC computerized information system functionality needs was developed. The instrument reveals acceptable test-retest reliability and reasonable internal consistency of the measures. The future nation-wide demonstration study will benefit from this study.

  9. SCELib3.0: The new revision of SCELib, the parallel computational library of molecular properties in the Single Center Approach

    Science.gov (United States)

    Sanna, N.; Baccarelli, I.; Morelli, G.

    2009-12-01

    SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions, aiming at a significant enhancement of its capabilities to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated and the code has been ported to platforms based on accelerating coprocessors, such as the NVIDIA GPGPU, and the new parallel model adopted is able to run efficiently on a mixed many-core computing system. Program summary: Program title: SCELib3.0 Catalogue identifier: ADMG_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 018 862 No. of bytes in distributed program, including test data, etc.: 4 955 014 Distribution format: tar.gz Programming language: C Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes. 1 to 32 (CPU or GPU) used RAM: Up to 32 GB depending on the molecular

  10. Noise-Resilient Quantum Computing with a Nitrogen-Vacancy Center and Nuclear Spins.

    Science.gov (United States)

    Casanova, J; Wang, Z-Y; Plenio, M B

    2016-09-23

    Selective control of qubits in a quantum register for the purposes of quantum information processing represents a critical challenge for dense spin ensembles in solid-state systems. Here we present a protocol that achieves a complete set of selective electron-nuclear gates and single nuclear rotations in such an ensemble in diamond facilitated by a nearby nitrogen-vacancy (NV) center. The protocol suppresses internuclear interactions as well as unwanted coupling between the NV center and other spins of the ensemble to achieve quantum gate fidelities well exceeding 99%. Notably, our method can be applied to weakly coupled, distant spins representing a scalable procedure that exploits the exceptional properties of nuclear spins in diamond as robust quantum memories.

  11. Efficient coherent driving of NV centers in a YIG-nanodiamond hybrid platform

    Science.gov (United States)

    Andrich, Paolo; de Las Casas, Charles F.; Liu, Xiaoying; Bretscher, Hope L.; Nealey, Paul F.; Awschalom, David D.; Heremans, F. Joseph

    The nitrogen-vacancy (NV) center in diamond is an ideal candidate for room temperature quantum computing and sensing applications. These schemes rely on magnetic dipolar interactions between the NV centers and other paramagnetic centers, imposing a stringent limit on the spin-to-spin separation. For instance, creating multi-qubit entanglement requires two NV centers to be within a few nanometers of each other, limiting the possibility for individual optical and microwave (MW) control. Moreover, to sense spins external to the diamond lattice the NV centers need to be within a few nanometers of the surface, where their coherence properties are strongly reduced. In this work, we address these limitations using a hybrid YIG-nanodiamond platform where propagating spin-waves (SWs) are used to mediate the interaction between a MW source and an NV center ensemble, thereby relaxing the requirements imposed by dipolar interactions. In particular, we show that SWs can be used to amplify a MW signal detected by the NV centers by more than two orders of magnitude, allowing us to obtain ultra-low energy SW-driven coherent control of the NV centers. These results demonstrate the potential of YIG-ND hybrid systems for the realization of enhanced quantum sensing and scalable computing devices. This work is supported by the ARO MURI program and the AFOSR.

  12. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman

    2012-02-01

    OpenMP was not well recognized at the beginning of the project, around the year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  13. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. This model estimates the most economical interval of computer replacement when computing demand, cost and performance of the computer, etc. are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented by only a central processing unit (CPU) and all the computing demand is to be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is allowed and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O) and it must process all the demand. Model 4 is the same as model 3, but excess demand may be processed at another center. (1) Computing demand at the JAERI, (2) conformity of Grosch's law for the recent computers, (3) replacement cost of computer systems, etc. are also described. (author)
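A generic sketch in the spirit of model 2 is given below: demand grows each year, the installed system has a fixed capacity, excess demand is processed elsewhere at a fee, and candidate replacement intervals are scanned for the lowest average annual cost. The demand curve and every cost figure are invented for illustration; this is not the JAERI model itself.

```python
# Toy "model 2" style calculation: pick the replacement interval that
# minimizes average annual cost when excess demand is outsourced.
# All numbers are made up.

def annual_demand(year, d0=100.0, growth=1.2):
    """Monotonically increasing computing demand (arbitrary units)."""
    return d0 * growth ** year

def cost_of_interval(interval,
                     purchase_cost=500.0,   # cost of a new system
                     capacity=150.0,        # work units/year it can process
                     run_cost=50.0,         # fixed operating cost per year
                     outsource_rate=2.0):   # cost per excess work unit
    total = purchase_cost
    for year in range(interval):
        excess = max(0.0, annual_demand(year) - capacity)
        total += run_cost + outsource_rate * excess
    return total / interval                 # average cost per year

best = min(range(1, 11), key=cost_of_interval)
print(best, round(cost_of_interval(best), 1))
```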

  14. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  15. Research Centers & Consortia | College of Engineering & Applied Science

    Science.gov (United States)


  16. Rapid guiding center calculations

    International Nuclear Information System (INIS)

    White, R.B.

    1995-04-01

    Premature loss of high energy particles, and in particular fusion alpha particles, is very deleterious in a fusion reactor. Because of this it is necessary to make long-time simulations, on the order of the alpha particle slowing down time, with a number of test particles sufficient to give predictions with reasonable statistical accuracy. Furthermore it is desirable to do this for a large number of equilibria with different characteristic magnetic field ripple, to best optimize engineering designs. In addition, modification of the particle distribution due to magnetohydrodynamic (MHD) modes such as the saw tooth mode present in the plasma can be important, and this effect requires additional simulation. Thus the large number of necessary simulations means any increase of computing speed in guiding center codes is an important improvement in predictive capability. Previous guiding center codes using numerical equilibria such as ORBIT evaluated the local field strength and ripple magnitude using Lagrangian interpolation on a grid. Evaluation of these quantities four times per time step (using a fourth order Runge-Kutta routine) constitutes the major computational effort of the code. In the present work the authors represent the field quantities through an expansion in terms of pseudo-cartesian coordinates formed from the magnetic coordinates. The simplicity of the representation gives four important advantages over previous methods
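As a toy illustration of the speed argument, the sketch below evaluates a field quantity from a short series in pseudo-cartesian coordinates built from the magnetic coordinates, replacing a grid interpolation with a handful of multiply-adds per evaluation. The coordinate mapping, the basis and the coefficients are invented for illustration and are not those used in ORBIT.

```python
import numpy as np

# Toy version of the idea above: represent |B| by a short series in
# pseudo-cartesian coordinates built from the magnetic coordinates
# (psi ~ radius, theta ~ poloidal angle). Coefficients are invented.

COEFFS = np.array([1.0, -0.05, 0.02, 0.01, -0.004, 0.003])  # c0..c5

def field_strength(psi, theta):
    """|B| from a series in x = sqrt(psi)*cos(theta), y = sqrt(psi)*sin(theta)."""
    x = np.sqrt(psi) * np.cos(theta)
    y = np.sqrt(psi) * np.sin(theta)
    basis = np.array([1.0, x, y, x * x, x * y, y * y])
    return COEFFS @ basis

# Each Runge-Kutta substep needs the field at the particle's position;
# a few multiply-adds replace a 2-D Lagrange interpolation on a grid.
print(field_strength(psi=0.25, theta=0.3))
```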

  17. Computational Toxicology as Implemented by the US EPA ...

    Science.gov (United States)

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T

  18. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Science.gov (United States)

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
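The client-driven pull model highlighted in this abstract can be sketched with a self-contained toy in which an in-process queue stands in for the server and each worker asks for work whenever it is free. This is not JobCenter's actual API or protocol; it only illustrates why a pull model gives inherent load balancing and lets worker nodes run behind firewalls.

```python
# Toy illustration of a client-driven ("pull") job queue: workers request
# jobs whenever they are idle, so load balances itself and workers never
# need inbound connections. Not JobCenter's real interface.

import queue
import threading

job_queue = queue.Queue()

def worker(name):
    """Pull jobs until the queue stays empty for half a second."""
    while True:
        try:
            job = job_queue.get(timeout=0.5)   # the worker asks for work
        except queue.Empty:
            return                             # queue drained; shut down
        print(f"{name} running job {job['id']} ({job['task']})")
        job_queue.task_done()

for i in range(6):
    job_queue.put({"id": i, "task": "blast-search" if i % 2 else "alignment"})

threads = [threading.Thread(target=worker, args=(f"worker-{n}",)) for n in (1, 2)]
for t in threads:
    t.start()
job_queue.join()          # wait until every job has been marked done
for t in threads:
    t.join()
```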

  19. Exploiting the Potential of Data Centers in the Smart Grid

    Science.gov (United States)

    Wang, Xiaoying; Zhang, Yu-An; Liu, Xiaojing; Cao, Tengfei

    As the number of cloud computing data centers has grown rapidly in recent years, from the perspective of the smart grid they have become large and noticeable electric loads. In this paper, we focus on the important role and the potential of data centers as controllable loads in the smart grid. We review relevant research in the area of letting data centers participate in the ancillary services market and demand response programs of the grid, and further investigate the possibility of exploiting the impact of data center placement on the grid. Various opportunities and challenges are summarized, which could provide more chances for researchers to explore this field.

  20. Inleiding: 'History of computing'. Geschiedschrijving over computers en computergebruik in Nederland

    Directory of Open Access Journals (Sweden)

    Adrienne van den Boogaard

    2008-06-01

    Along with the international trends in the history of computing, Dutch contributions over the past twenty years moved away from a focus on machinery to the broader scope of the use of computers, the appropriation of computing technologies in various traditions, labour relations and professionalisation issues, and, lately, software. It is only natural that an emerging field like computer science sets out to write its genealogy and canonise the important steps in its intellectual endeavour. It is fair to say that a historiography diverging from such “home” interest started in 1987 with the work of Eda Kranakis – then active in The Netherlands – commissioned by the national bureau for technology assessment, and Gerard Alberts, turning a commemorative volume of the Mathematical Center into a history of the same institute. History of computing in The Netherlands made a major leap in the spring of 1994 when Dirk de Wit, Jan van den Ende and Ellen van Oost defended their dissertations on the roads towards adoption of computing technology in banking, in science and engineering, and on the gender aspect in computing. Here, history of computing had already moved from machines to the use of computers. The three authors joined Gerard Alberts and Onno de Wit in preparing a volume on the rise of IT in The Netherlands, the sequel of which is now in preparation by a team led by Adrienne van den Bogaard. Dutch research reflected the international attention for professionalisation issues (Ensmenger, Haigh) very early on in the dissertation by Ruud van Dael, Something to do with computers (2001), revealing how occupations dealing with computers typically escape the pattern of closure by professionalisation as expected by the, thus outdated, sociology of professions. History of computing not only takes use and users into consideration, but finally, as one may say, confronts the technological side of putting the machine to use, software, head on. The groundbreaking works

  1. Sensitive Data Protection Based on Intrusion Tolerance in Cloud Computing

    OpenAIRE

    Jingyu Wang; xuefeng Zheng; Dengliang Luo

    2011-01-01

    Service integration and on-demand supply coming from cloud computing can significantly improve the utilization of computing resources and reduce the power consumption per service, and effectively avoid errors in computing resources. However, cloud computing still faces the problem of intrusion tolerance of the cloud computing platform and of sensitive data in the new enterprise data center. In order to address the problem of intrusion tolerance of cloud computing platform and sensitive data in...

  2. Active Computer Network Defense: An Assessment

    Science.gov (United States)

    2001-04-01

    sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in...the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/ Internet Protocol (TCP/IP) networks are inherently resistant to...aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center

  3. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  4. · Attitude towards Computers and Classroom Management of Language School Teachers

    Directory of Open Access Journals (Sweden)

    Sara Jalali

    2014-07-01

    Computer-assisted language learning (CALL) is the realization of computers in schools and universities, which has potentially enhanced the language learning experience inside classrooms. The integration of these technologies into the classroom demands that teachers adopt a number of classroom management procedures to maintain a more learner-centered and conducive language learning environment. The current study explored the relationship between computer attitudes and behavior and the instructional classroom management approaches implemented by English institute teachers. In so doing, a total of 105 male (n = 27) and female (n = 78) EFL teachers participated in this study. A computer attitude questionnaire adapted from Albirini (2006) and a Behavior and Instructional Management Scale (BIMS) adopted from Martin and Sass (2010) were used to collect the data. The results of the Pearson Correlation Coefficient revealed that there were no significant relationships between attitude and behavior and instructional management across gender. However, it was found that the more male teachers tend toward using computers in their classes, the more teacher-centered their classes become. In addition, the more female teachers are prone to use computers in their classes, the more student-centered and lenient their classes become.

  5. Multi-Center Electronic Structure Calculations for Plasma Equation of State

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, B G; Johnson, D D; Alam, A

    2010-12-14

    We report on an approach for computing electronic structure utilizing solid-state multi-center scattering techniques, but generalized to finite temperatures to model plasmas. This approach has the advantage of handling mixtures at a fundamental level without the imposition of ad hoc continuum lowering models, and incorporates bonding and charge exchange, as well as multi-center effects in the calculation of the continuum density of states.

  6. SCELib2: the new revision of SCELib, the parallel computational library of molecular properties in the single center approach

    Science.gov (United States)

    Sanna, N.; Morelli, G.

    2004-09-01

    In this paper we present the new version of the SCELib program (CPC Catalogue identifier ADMG), a full numerical implementation of the Single Center Expansion (SCE) method. The physics involved is that of producing the SCE description of molecular electronic densities, of molecular electrostatic potentials and of molecular perturbed potentials due to a point negative or positive charge. This new revision of the program has been optimized to run in serial as well as in parallel execution mode, to support a larger set of molecular symmetries and to permit the restart of long-lasting calculations. To measure the performance of this new release, a comparative study has been carried out on the most powerful computing architectures in serial and parallel runs. The results of the calculations reported in this paper refer to real-case medium to large molecular systems, and they are reported in full detail to best benchmark the parallel architectures the new SCELib code will run on. Program summary: Title of program: SCELib2 Catalogue identifier: ADGU Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADGU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to previous versions: Comput. Phys. Commun. 128 (2) (2000) 139 (CPC catalogue identifier: ADMG) Does the new version supersede the original program?: Yes Computer for which the program is designed and others on which it has been tested: HP ES45 and rx2600, SUN ES4500, IBM SP and any single CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors Installations: CASPUR, local Operating systems under which the program has been tested: HP Tru64 V5.X, SUNOS V5.8, IBM AIX V5.X, Linux RedHat V8.0 Programming language used: C Memory required to execute with typical data: 10 Mwords. Up to 2000 Mwords depending on the molecular system and runtime parameters No. of bits in a word: 64 No. of processors used: 1 to 32 Has the code been vectorized or parallelized?: Yes
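As a toy numerical illustration of a single center expansion, the sketch below projects an angular function onto spherical harmonics by simple quadrature on a (theta, phi) grid. The test function, grid and quadrature are illustrative stand-ins and have nothing to do with SCELib's internals.

```python
import numpy as np
from scipy.special import sph_harm

# Toy single-center expansion: project f(theta, phi) at fixed radius onto
# spherical harmonics Y_lm by quadrature on an angular grid.

def sce_coefficients(f, lmax, n_theta=64, n_phi=128):
    theta = np.linspace(0.0, np.pi, n_theta)                 # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)  # azimuth
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    vals = f(TH, PH)
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    coeffs = {}
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, PH, TH)        # scipy expects (m, l, azimuth, polar)
            coeffs[(l, m)] = np.sum(vals * np.conj(Y) * np.sin(TH)) * dth * dph
    return coeffs

# Example: a cos^2(theta) angular density; weight should land in (0,0) and (2,0)
c = sce_coefficients(lambda th, ph: np.cos(th) ** 2, lmax=2)
for (l, m), v in sorted(c.items()):
    print(l, m, round(abs(v), 4))
```

For this test function essentially all of the weight appears in the (l, m) = (0, 0) and (2, 0) coefficients, as expected for an l = 0 plus l = 2 angular shape.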

  7. Plasma Science and Innovation Center (PSI-Center) at Washington, Wisconsin, and Utah State, ARRA Supplement

    Energy Technology Data Exchange (ETDEWEB)

    Sovinec, Carl [Univ. of Wisconsin-Madison, Madison, WI (United States)

    2018-03-14

    The objective of the Plasma Science and Innovation Center (PSI-Center) is to develop and deploy computational models that simulate conditions in smaller, concept-exploration plasma experiments. The PSIC group at the University of Wisconsin-Madison, led by Prof. Carl Sovinec, uses and enhances the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code to simulate macroscopic plasma dynamics in a number of magnetic confinement configurations. These numerical simulations provide information on how magnetic fields and plasma flows evolve over all three spatial dimensions, which supplements the limited access of diagnostics in plasma experiments. The information gained from simulation helps explain how plasma evolves. It is also used to engineer more effective plasma confinement systems, reducing the need to build many experiments to cover the physical parameter space. The ultimate benefit is a more cost-effective approach to the development of fusion energy for peaceful power production. The supplemental funds provided by the American Recovery and Reinvestment Act of 2009 were used to purchase computer components that were assembled into a 48-core system with 256 GB of shared memory. The system was engineered and constructed by the group's system administrator at the time, Anthony Hammond. It was successfully used by then-graduate student Dr. John O'Bryan for computing the magnetic relaxation dynamics that occur during experimental tests of non-inductive startup in the Pegasus Toroidal Experiment (pegasus.ep.wisc.edu). Dr. O'Bryan's simulations provided the first detailed explanation of how the driven helical filament of electrical current evolves into a toroidal tokamak-like plasma configuration.

  8. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  9. Developmental Stages in School Computer Use: Neither Marx Nor Piaget.

    Science.gov (United States)

    Lengel, James G.

    Karl Marx's theory of stages can be applied to computer use in the schools. The first stage, the P Stage, comprises the entry of the computer into the school. Computer use at this stage is personal and tends to center around one personality. Social studies teachers are seldom among this select few. The second stage of computer use, the D Stage, is…

  10. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and new areas that support it and use it. It is a concise introduction to the most prominent, recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing, and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment make this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides a non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  11. About Security Solutions in Fog Computing

    Directory of Open Access Journals (Sweden)

    Eugen Petac

    2016-01-01

    Full Text Available The key to improving a system's performance, security and reliability is to have the data processed locally in remote data centers. Fog computing extends cloud computing by bringing its services to devices and users at the edge of the network. This paper explores the fog computing environment and describes the security issues that arise in it. Fog computing improves the quality of service delivered to the user by complementing the shortcomings of the cloud in the IoT (Internet of Things) environment. Our proposal, named Adaptive Fog Computing Node Security Profile (AFCNSP), which is based on Linux security solutions, provides improved fog node security with rich feature sets.

  12. Readiness of healthcare providers for eHealth: the case from primary healthcare centers in Lebanon.

    Science.gov (United States)

    Saleh, Shadi; Khodor, Rawya; Alameddine, Mohamad; Baroud, Maysa

    2016-11-10

    eHealth can positively impact the efficiency and quality of healthcare services. Its potential benefits extend to the patient, healthcare provider, and organization. Primary healthcare (PHC) settings may particularly benefit from eHealth. In these settings, healthcare provider readiness is key to successful eHealth implementation. Accordingly, it is necessary to explore the potential readiness of providers to use eHealth tools. Therefore, the purpose of this study was to assess the readiness of healthcare providers working in PHC centers in Lebanon to use eHealth tools. A self-administered questionnaire was used to assess participants' socio-demographics, computer use, literacy, and access, and participants' readiness for eHealth implementation (appropriateness, management support, change efficacy, personal beneficence). The study included primary healthcare providers (physicians, nurses, other providers) working in 22 PHC centers distributed across Lebanon. Descriptive and bivariate analyses (ANOVA, independent t-test, Kruskal Wallis, Tamhane's T2) were used to compare participant characteristics to the level of readiness for the implementation of eHealth. Of the 541 questionnaires, 213 were completed (response rate: 39.4 %). The majority of participants were physicians (46.9 %), and nurses (26.8 %). Most physicians (54.0 %), nurses (61.4 %), and other providers (50.9 %) felt comfortable using computers, and had access to computers at their PHC center (physicians: 77.0 %, nurses: 87.7 %, others: 92.5 %). Frequency of computer use varied. The study found a significant difference for personal beneficence, management support, and change efficacy among different healthcare providers, and relative to participants' level of comfort using computers. There was a significant difference by level of comfort using computers and appropriateness. A significant difference was also found between those with access to computers in relation to personal beneficence and

  13. Communications among data and science centers

    Science.gov (United States)

    Green, James L.

    1990-01-01

    The ability to electronically access and query the contents of remote computer archives is of singular importance in space and earth sciences; the present evaluation of such on-line information networks' development status foresees swift expansion of their data capabilities and complexity, in view of the volumes of data that will continue to be generated by NASA missions. The U.S.'s National Space Science Data Center (NSSDC) manages NASA's largest science computer network, the Space Physics Analysis Network; a comprehensive account is given of the structure of NSSDC international access through BITNET, and of connections to the NSSDC available in the Americas via the International X.25 network.

  14. Scientific Computing Strategic Plan for the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Whiting, Eric Todd

    2015-01-01

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  15. Validation of Shielding Analysis Capability of SuperMC with SINBAD

    Directory of Open Access Journals (Sweden)

    Chen Chaobin

    2017-01-01

    Full Text Available Abstract: The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD was compiled by RSICC and the NEA; it includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from the SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.

  16. Collection of reports on use of computation fund utilized in common in 1988

    International Nuclear Information System (INIS)

    1989-05-01

    The Nuclear Physics Research Center, Osaka University, has provided the computation fund utilized in common since 1976 to support computation related to the activities of the Center. When this computation fund is used, two reports are to be presented after the work is finished: a short report in a fixed format (printed in RCNP-Z together with the report of the committee on the computation fund utilized in common) and a detailed report on the contents of the computation. The latter report includes an English abstract; an explanation of the computed results and their physical content; new developments, difficulties encountered in the computational techniques and how they were solved; and the subroutines and functions used, their purposes, and block diagrams. This book is the collection of these detailed reports on the use of the computation fund utilized in common in fiscal year 1988. The invitation to apply for the computation fund utilized in common is announced in December every year in RCNP-Z. (K.I.)

  17. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  18. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  19. ACToR - Aggregated Computational Toxicology Resource

    International Nuclear Information System (INIS)

    Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

    2008-01-01

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast TM

  20. CICART Center For Integrated Computation And Analysis Of Reconnection And Turbulence

    International Nuclear Information System (INIS)

    Bhattacharjee, Amitava

    2016-01-01

    CICART is a partnership between the University of New Hampshire (UNH) and Dartmouth College. CICART addresses two important science needs of the DoE: the basic understanding of magnetic reconnection and turbulence that strongly impacts the performance of fusion plasmas, and the development of new mathematical and computational tools that enable the modeling and control of these phenomena. The principal participants of CICART constitute an interdisciplinary group, drawn from the communities of applied mathematics, astrophysics, computational physics, fluid dynamics, and fusion physics. It is a main premise of CICART that fundamental aspects of magnetic reconnection and turbulence in fusion devices, smaller-scale laboratory experiments, and space and astrophysical plasmas can be viewed from a common perspective, and that progress in understanding in any of these interconnected fields is likely to lead to progress in others. The establishment of CICART has strongly impacted the education and research mission of a new Program in Integrated Applied Mathematics in the College of Engineering and Applied Sciences at UNH by enabling the recruitment of a tenure-track faculty member, supported equally by UNH and CICART, and the establishment of an IBM-UNH Computing Alliance. The proposed areas of research in magnetic reconnection and turbulence in astrophysical, space, and laboratory plasmas include the following topics: (A) Reconnection and secondary instabilities in large high-Lundquist-number plasmas, (B) Particle acceleration in the presence of multiple magnetic islands, (C) Gyrokinetic reconnection: comparison with fluid and particle-in-cell models, (D) Imbalanced turbulence, (E) Ion heating, and (F) Turbulence in laboratory (including fusion-relevant) experiments. These theoretical studies make active use of three high-performance computer simulation codes: (1) The Magnetic Reconnection Code, based on extended two-fluid (or Hall MHD) equations, in an Adaptive Mesh

  1. The VINEYARD project: Versatile Integrated Accelerator-based Heterogeneous Data Centers

    OpenAIRE

    Kachris, Christoforos; Soudris, Dimitrios; Gaydadjiev, Georgi; Nguyen, Huy-Nam

    2016-01-01

    Emerging applications like cloud computing and big data analytics have created the need for powerful centers hosting hundreds of thousands of servers. Currently, data centers are based on general purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop novel servers based on programmable hardware accelerators. Furthermore, VINEYARD will develop an integrated framework for allowing end-users to seamlessly utilize...

  2. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    Science.gov (United States)

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, theoretic analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.
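
    The finite improvement property of the potential game described above suggests a simple decentralized procedure: sensors take turns switching to whichever offloading decision lowers their own cost, and when no sensor can improve, the resulting profile is a Nash equilibrium. The Python sketch below illustrates that pattern with an assumed toy cost model (a fixed local-execution cost versus an offloading cost that grows with the number of sensors sharing the cloudlet); it is not the paper's COD algorithm or cost functions.

```python
def local_cost(s):
    """Assumed cost of executing the task on the sensor itself (energy + delay)."""
    return s["local_energy"] + s["local_delay"]

def offload_cost(s, n_offloading):
    """Assumed offloading cost when n_offloading sensors share the cloudlet/AP."""
    return s["tx_energy"] + s["remote_delay"] * n_offloading

def best_response_offloading(sensors, max_rounds=100):
    """Toy finite-improvement iteration: one unilateral switch at a time."""
    offload = [False] * len(sensors)                        # False = compute locally
    for _ in range(max_rounds):
        improved = False
        for i, s in enumerate(sensors):
            others = sum(offload) - offload[i]              # other sensors offloading
            prefers_offload = offload_cost(s, others + 1) < local_cost(s)
            if prefers_offload != offload[i]:               # switch only if it helps
                offload[i] = prefers_offload
                improved = True
        if not improved:                                    # no one can improve: equilibrium
            break
    return offload

sensors = [{"local_energy": 5.0, "local_delay": 4.0,
            "tx_energy": 1.0, "remote_delay": 2.0} for _ in range(4)]
print(best_response_offloading(sensors))                    # e.g. [True, True, True, False]
```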

  3. Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems

    Science.gov (United States)

    Mitchell, Christine M.

    1998-01-01

    A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument Systems (EFIS), Flight Management Computer (FMC) and Auto Flight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional-looking B757/767 displays or next-generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.

  4. A Modified Artificial Bee Colony Algorithm for p-Center Problems

    Directory of Open Access Journals (Sweden)

    Alkın Yurtkuran

    2014-01-01

    Full Text Available The objective of the p-center problem is to locate p-centers on a network such that the maximum of the distances from each node to its nearest center is minimized. The artificial bee colony algorithm is a swarm-based meta-heuristic algorithm that mimics the foraging behavior of honey bee colonies. This study proposes a modified ABC algorithm that benefits from a variety of search strategies to balance exploration and exploitation. Moreover, random key-based coding schemes are used to solve the p-center problem effectively. The proposed algorithm is compared to state-of-the-art techniques using different benchmark problems, and computational results reveal that the proposed approach is very efficient.
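
    As a point of reference for the formulation above, the p-center objective is easy to evaluate once a candidate set of centers is fixed, and a random-key vector can be decoded into such a set by ranking the keys. The Python sketch below shows both steps under assumptions of mine (a precomputed distance matrix; "take the p nodes with the smallest keys" as the decoding rule, which is one common choice and not necessarily the paper's exact scheme).

```python
import random

def p_center_cost(dist, centers):
    """Maximum over all nodes of the distance to their nearest chosen center."""
    return max(min(dist[v][c] for c in centers) for v in range(len(dist)))

def decode_random_keys(keys, p):
    """Decode a random-key vector: the p nodes with the smallest keys become centers."""
    return sorted(range(len(keys)), key=lambda i: keys[i])[:p]

# Tiny example: 5 nodes, symmetric distance matrix, p = 2.
dist = [[0, 2, 9, 4, 7],
        [2, 0, 6, 3, 8],
        [9, 6, 0, 5, 1],
        [4, 3, 5, 0, 6],
        [7, 8, 1, 6, 0]]
keys = [random.random() for _ in dist]          # one key per node
centers = decode_random_keys(keys, p=2)
print(centers, p_center_cost(dist, centers))
```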

  5. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob Daniel

    2012-07-01

    Full Text Available Abstract Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud” and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
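
    The client-driven communication model described above is what lets worker nodes run behind external firewalls or in the cloud: the worker always opens the connection, pulling the next job from the server instead of waiting to be contacted. The Python sketch below shows that generic pull-style worker loop; it only illustrates the pattern and does not use JobCenter's actual Java API or wire protocol (the server URL and endpoint names are hypothetical).

```python
import json
import subprocess
import time
import urllib.error
import urllib.request

SERVER = "http://jobserver.example.org/api"       # hypothetical endpoint

def fetch_next_job():
    """Worker-initiated request: ask the server for a queued job, if any."""
    try:
        with urllib.request.urlopen(f"{SERVER}/next-job", timeout=10) as resp:
            return json.loads(resp.read()) or None
    except urllib.error.URLError:
        return None                               # server unreachable; retry later

def run_worker(poll_seconds=30):
    """Pull loop: traffic is outbound-only, so the worker can sit behind a firewall."""
    while True:
        job = fetch_next_job()
        if job is None:
            time.sleep(poll_seconds)              # nothing to do yet
            continue
        result = subprocess.run(job["command"], shell=True,
                                capture_output=True, text=True)
        report = json.dumps({"job_id": job["id"],
                             "returncode": result.returncode,
                             "stdout": result.stdout[-4096:]}).encode()
        req = urllib.request.Request(f"{SERVER}/report", data=report,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=10)   # POST the result back

if __name__ == "__main__":
    run_worker()
```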

  6. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  7. User-centered design in brain-computer interfaces-a case study.

    Science.gov (United States)

    Schreuder, Martijn; Riccio, Angela; Risetti, Monica; Dähne, Sven; Ramsay, Andrew; Williamson, John; Mattia, Donatella; Tangermann, Michael

    2013-10-01

    The array of available brain-computer interface (BCI) paradigms has continued to grow, and so has the corresponding set of machine learning methods which are at the core of BCI systems. The latter have evolved to provide more robust data analysis solutions, and as a consequence the proportion of healthy BCI users who can use a BCI successfully is growing. With this development the chances have increased that the needs and abilities of specific patients, the end-users, can be covered by an existing BCI approach. However, most end-users who have experienced the use of a BCI system at all have encountered a single paradigm only. This paradigm is typically the one that is being tested in the study that the end-user happens to be enrolled in, along with other end-users. Though this corresponds to the preferred study arrangement for basic research, it does not ensure that the end-user experiences a working BCI. In this study, a different approach was taken; that of a user-centered design. It is the prevailing process in traditional assistive technology. Given an individual user with a particular clinical profile, several available BCI approaches are tested and - if necessary - adapted to him/her until a suitable BCI system is found. Described is the case of a 48-year-old woman who suffered from an ischemic brain stem stroke, leading to a severe motor- and communication deficit. She was enrolled in studies with two different BCI systems before a suitable system was found. The first was an auditory event-related potential (ERP) paradigm and the second a visual ERP paradigm, both of which are established in literature. The auditory paradigm did not work successfully, despite favorable preconditions. The visual paradigm worked flawlessly, as found over several sessions. This discrepancy in performance can possibly be explained by the user's clinical deficit in several key neuropsychological indicators, such as attention and working memory. While the auditory paradigm relies

  8. [Activities of Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  9. St. Luke's Medical Center: technologizing health care

    International Nuclear Information System (INIS)

    Tumanguil, S.S.

    1994-01-01

    The computerization of the St. Luke's Medical Center improved the hospital administration and management, particularly in nuclear medicine department. The use of computer-aided X-ray simulator machine and computerized linear accelerator machine in diagnosing and treating cancer are the most recent medical technological breakthroughs that benefited thousands of Filipino cancer patients. 4 photos

  10. VACET: Proposed SciDAC2 Visualization and Analytics Center for Enabling Technologies

    International Nuclear Information System (INIS)

    Bethel, W; Johnson, C; Hansen, C; Parker, S; Sanderson, A; Silva, C; Tricoche, X; Pascucci, V; Childs, H; Cohen, J; Duchaineau, M; Laney, D; Lindstrom, P; Ahern, S; Meredith, J; Ostrouchov, G; Joy, K; Hamann, B

    2006-01-01

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an 'information big bang,' which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies

  11. Cloud Computing Databases: Latest Trends and Architectural Concepts

    OpenAIRE

    Tarandeep Singh; Parvinder S. Sandhu

    2011-01-01

    Economic factors are leading to the rise of infrastructures that provide software and computing facilities as a service, known as cloud services or cloud computing. Cloud services can provide efficiencies for application providers, both by limiting up-front capital expenses and by reducing the cost of ownership over time. Such services are made available in a data center, using shared commodity hardware for computation and storage. There is a varied set of cloud services...

  12. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

    Full Text Available Cloud computing is a service model in which basic computing resources are packaged and placed in a data center, to be accessed over the Internet on demand. Data center architectures in cloud computing environments are heterogeneous and distributed, composed of clusters of networked servers whose physical machines have different computing resource capacities. Fluctuations in the demand for and availability of cloud services can be handled in the data center by abstracting the resources with virtualization technology. A virtual machine (VM) is a representation of available computing resources that can be dynamically allocated and reallocated on demand. This study considers VM consolidation as an energy conservation measure in private cloud computing systems, targeting the optimization of the VM selection policy and of VM migration within the consolidation procedure. In a cloud data center, VMs hosting different types of services and applications require different levels of computing resources, and the resulting unbalanced resource usage across physical servers can be reduced by live VM migration to achieve workload balancing. A practical approach was used to develop an OpenStack-based cloud computing environment, integrating the cloud VMs and the VM placement selection procedure with OpenStack Neat VM consolidation. CPU time values are sampled to obtain the average CPU utilization, in MHz, over a specific time period: the average CPU utilization of a VM is obtained from the current CPU_time minus the CPU_time from the previous data retrieval, multiplied by the maximum frequency of the CPU, and divided by the time between the current and previous retrievals, in milliseconds.
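
    Read literally, the sampling rule at the end of the abstract computes a VM's average utilization in MHz as (CPU_time now minus CPU_time at the previous retrieval) times the maximum CPU frequency, divided by the wall-clock time between the two retrievals in milliseconds. A minimal sketch of that arithmetic is shown below; the units are illustrative assumptions (CPU_time and timestamps both in milliseconds), and the actual OpenStack Neat data collector may use different units.

```python
def avg_cpu_utilization_mhz(cpu_time_now, cpu_time_prev,
                            t_now_ms, t_prev_ms, max_freq_mhz):
    """Average CPU utilization of a VM over one sampling interval, in MHz.

    (current CPU_time - previous CPU_time) * max CPU frequency
    -----------------------------------------------------------
       (current sample time - previous sample time) [ms]
    """
    busy_ms = cpu_time_now - cpu_time_prev        # CPU actually consumed by the VM
    interval_ms = t_now_ms - t_prev_ms            # elapsed wall-clock time
    return busy_ms * max_freq_mhz / interval_ms

# Example: 400 ms of CPU consumed during a 1000 ms interval on a 2600 MHz core.
print(avg_cpu_utilization_mhz(1400, 1000, 5000, 4000, 2600))   # -> 1040.0 MHz
```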

  13. Magnetic fusion energy and computers: the role of computing in magnetic fusion energy research and development

    International Nuclear Information System (INIS)

    1979-10-01

    This report examines the role of computing in the Department of Energy magnetic confinement fusion program. The present status of the MFECC and its associated network is described. The third part of this report examines the role of computer models in the main elements of the fusion program and discusses their dependence on the most advanced scientific computers. A review of requirements at the National MFE Computer Center was conducted in the spring of 1976. The results of this review led to the procurement of the CRAY 1, the most advanced scientific computer available, in the spring of 1978. The utilization of this computer in the MFE program has been very successful and is also described in the third part of the report. A new study of computer requirements for the MFE program was conducted during the spring of 1979 and the results of this analysis are presented in the fourth part of this report

  14. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request particularly requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
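
    The hierarchical strategy proposed above can be pictured as a two-level decision: a front-end dispatcher chooses a cluster, and that cluster's local manager places the blocks of the SDR transceiver chain onto processors with enough spare capacity. The Python sketch below illustrates that division of labor under assumed capacities and demands; the paper's actual mapping algorithms and metrics are more elaborate.

```python
# Toy two-level allocation for an SDR cloud: dispatcher -> cluster -> processors.
# Capacities and demands (arbitrary "compute units") are assumptions for illustration.
clusters = [
    {"name": "cluster-0", "processors": [10.0, 10.0, 10.0]},
    {"name": "cluster-1", "processors": [10.0, 10.0]},
]

def place_session(clusters, blocks):
    """Dispatcher level: try clusters by total free capacity.
    Cluster level: first-fit each processing block onto a processor."""
    for cluster in sorted(clusters, key=lambda c: -sum(c["processors"])):
        free = list(cluster["processors"])        # tentative copy
        placement = []
        for demand in blocks:
            for i, capacity in enumerate(free):
                if capacity >= demand:
                    free[i] -= demand
                    placement.append(i)
                    break
            else:
                placement = None                  # this cluster cannot host the session
                break
        if placement is not None:
            cluster["processors"] = free          # commit the allocation
            return cluster["name"], placement
    return None                                   # reject the session request

# One session = a transceiver chain of three processing blocks.
print(place_session(clusters, [4.0, 3.5, 6.0]))   # ('cluster-0', [0, 0, 1])
```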

  15. Human Centered Design and Development for NASA's MerBoard

    Science.gov (United States)

    Trimble, Jay

    2003-01-01

    This viewgraph presentation provides an overview of the design and development process for NASA's MerBoard. These devices are large interactive display screens which can be shown on the user's computer, which will allow scientists in many locations to interpret and evaluate mission data in real-time. These tools are scheduled to be used during the 2003 Mars Exploration Rover (MER) expeditions. Topics covered include: mission overview, Mer Human Centered Computers, FIDO 2001 observations and MerBoard prototypes.

  16. 10th International Conference on Computer Simulation of Radiation Effects in Solids - COSIRES 2010. Abstracts and Programme

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    COSIRES 2010 is the 10th International Conference on Computer Simulation of Radiation Effects in Solids. This series of conferences addresses the development and application of advanced computer modeling techniques to the study of phenomena taking place during interaction of energetic particles and clusters (from several eV to some MeV) with solids. Due to the continuous development of new theoretical methodologies and permanent increase of computer power this research field is growing fast. The application of computer simulations leads to a better understanding of basic microscopic processes taking place during and after irradiation. Fundamental understanding of such processes is often not accessible by experimental methods since they occur on very small time and length scales. However, computer simulation techniques are not only used for investigations of basic phenomena but also increasingly applied in the development of modern industrial technologies. Conference topics include, but are not limited to: I) Computer modeling of following phenomena: • Sputtering; • Formation and evolution of radiation defects in materials; • Radiation responses of structural materials important for nuclear and fusion industry; • Irradiation-induced evolution of surface topography and ripple formation; • Ion beam synthesis of thin films and nanostructures; • Ion-, electron and photon-induced physical and chemical effects at surfaces, interfaces and nanostructures; • Irradiation-induced charge redistribution, electron excitation and electron-phonon interactions II) Development of new computer modeling protocols and interatomic potentials for investigation of radiation effects. The conference follows previous meetings that were held in Berlin/Germany (1992), Santa Barbara/USA (1994), Guildford/UK (1996), Okayama/Japan (1998), State College/USA (2000), Dresden/Germany (2002), Helsinki/Finland (2004

  17. Spectroscopic and computational study of a nonheme iron nitrosyl center in a biosynthetic model of nitric oxide reductase.

    Science.gov (United States)

    Chakraborty, Saumen; Reed, Julian; Ross, Matthew; Nilges, Mark J; Petrik, Igor D; Ghosh, Soumya; Hammes-Schiffer, Sharon; Sage, J Timothy; Zhang, Yong; Schulz, Charles E; Lu, Yi

    2014-02-24

    A major barrier to understanding the mechanism of nitric oxide reductases (NORs) is the lack of a selective probe of NO binding to the nonheme FeB center. By replacing the heme in a biosynthetic model of NORs, which structurally and functionally mimics NORs, with isostructural ZnPP, the electronic structure and functional properties of the FeB nitrosyl complex was probed. This approach allowed observation of the first S=3/2 nonheme {FeNO}(7) complex in a protein-based model system of NOR. Detailed spectroscopic and computational studies show that the electronic state of the {FeNO}(7) complex is best described as a high spin ferrous iron (S=2) antiferromagnetically coupled to an NO radical (S=1/2) [Fe(2+)-NO(.)]. The radical nature of the FeB -bound NO would facilitate N-N bond formation by radical coupling with the heme-bound NO. This finding, therefore, supports the proposed trans mechanism of NO reduction by NORs. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  19. Computer-aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1997-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP)

  20. Spectrum of tablet computer use by medical students and residents at an academic medical center.

    Science.gov (United States)

    Robinson, Robert

    2015-01-01

    Introduction. The value of tablet computer use in medical education is an area of considerable interest, with preliminary investigations showing that the majority of medical trainees feel that tablet computers added value to the curriculum. This study investigated potential differences in tablet computer use between medical students and resident physicians. Materials & Methods. Data collection for this survey was accomplished with an anonymous online questionnaire shared with the medical students and residents at Southern Illinois University School of Medicine (SIU-SOM) in July and August of 2012. Results. There were 76 medical student responses (26% response rate) and 66 resident/fellow responses to this survey (21% response rate). Residents/fellows were more likely to use tablet computers several times daily than medical students (32% vs. 20%, p = 0.035). The most common reported uses were for accessing medical reference applications (46%), e-Books (45%), and board study (32%). Residents were more likely than students to use a tablet computer to access an electronic medical record (41% vs. 21%, p = 0.010), review radiology images (27% vs. 12%, p = 0.019), and enter patient care orders (26% vs. 3%, p e-Books, and to study for board exams. Residents were more likely to use tablet computers to complete clinical tasks. Conclusions. Tablet computer use among medical students and resident physicians was common in this survey. All learners used tablet computers for point of care references and board study. Resident physicians were more likely to use tablet computers to access the EMR, enter patient care orders, and review radiology studies. This difference is likely due to the differing educational and professional demands placed on resident physicians. Further study is needed better understand how tablet computers and other mobile devices may assist in medical education and patient care.

  1. Data center equipment location and monitoring system

    DEFF Research Database (Denmark)

    2011-01-01

    A data center equipment location system includes both hardware and software to provide for location, monitoring, security and identification of servers and other equipment in equipment racks. The system provides a wired alternative to the wireless RFID tag system by using electronic ID tags connected to each piece of equipment, each electronic ID tag connected directly by wires to an equipment rack controller on the equipment rack. The equipment rack controllers then link over a local area network to a central control computer. The central control computer provides an operator interface and runs a software application program that communicates with the equipment rack controllers. The software application program of the central control computer stores IDs of the equipment rack controllers and each of its connected electronic ID tags in a database. The software application program
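
    In software terms, the database described above is essentially a mapping from each equipment rack controller to the electronic ID tags wired to it, which makes locating a piece of equipment a reverse lookup. The short sketch below shows that data model in Python; the identifiers and the dictionary schema are hypothetical stand-ins, not the patented system's actual database layout.

```python
# Hypothetical in-memory stand-in for the central control computer's database:
# rack controller ID -> {electronic tag ID: rack position}.
rack_db = {
    "rack-controller-A1": {"tag-0017": "U12", "tag-0042": "U13"},
    "rack-controller-B3": {"tag-0099": "U01"},
}

def locate_equipment(db, tag_id):
    """Reverse lookup: which rack controller (and slot) reports this tag?"""
    for controller, tags in db.items():
        if tag_id in tags:
            return controller, tags[tag_id]
    return None                                   # tag not reported by any controller

print(locate_equipment(rack_db, "tag-0042"))      # ('rack-controller-A1', 'U13')
```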

  2. Digital computer structure and design

    CERN Document Server

    Townsend, R

    2014-01-01

    Digital Computer Structure and Design, Second Edition discusses switching theory, counters, sequential circuits, number representation, and arithmetic functions. The book also describes computer memories, the processor, data flow system of the processor, the processor control system, and the input-output system. Switching theory, which is purely a mathematical concept, centers on the properties of interconnected networks of "gates." The theory deals with binary functions of 1 and 0 which can change instantaneously from one to the other without intermediate values. The binary number system is

  3. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Science.gov (United States)

    2013-07-15

    ... 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match No. 18 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION... Act of 1974, as amended, this notice announces the establishment of a CMP that CMS plans to conduct...

  4. Space Flight Operations Center local area network

    Science.gov (United States)

    Goodman, Ross V.

    1988-01-01

    The existing Mission Control and Computer Center at JPL will be replaced by the Space Flight Operations Center (SFOC). One part of the SFOC is the LAN-based distribution system. The purpose of the LAN is to distribute the processed data among the various elements of the SFOC. The SFOC LAN will provide a robust subsystem that will support the Magellan launch configuration and future project adaptation. Its capabilities include (1) a proven cable medium as the backbone for the entire network; (2) hardware components that are reliable, varied, and follow OSI standards; (3) accurate and detailed documentation for fault isolation and future expansion; and (4) proven monitoring and maintenance tools.

  5. Portability and the National Energy Software Center

    International Nuclear Information System (INIS)

    Butler, M.K.

    1978-01-01

    The software portability problem is examined from the viewpoint of experience gained in the operation of a software exchange and information center. First, the factors contributing to the program interchange to date are identified; then major problem areas remaining are noted. The import of the development of programming language and documentation standards is noted, and the program packaging procedures and dissemination practices employed by the Center to facilitate successful software transport are described. Organization, or installation, dependencies of the computing environment, often hidden from the program author, and data interchange complexities are seen as today's primary issues, with dedicated processors and network communications offering an alternative solution

  6. Institute for Computational Mechanics in Propulsion (ICOMP)

    Science.gov (United States)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    2001-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1999, the Institute's fourteenth year of operation.

  7. Planning and management of cloud computing networks

    Science.gov (United States)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and as entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of an important power consumption. If the power consumption of telecommunication networks and data centers were considered as the power consumption of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a

  8. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the amount of traffic related to the geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Then, numerical evaluations show that, when there is strong traffic locality and the router has ideal energy proportionality, the system's power consumption is reduced to about 50% of the power consumed in the case where a DCN is not used; moreover, this advantage becomes even larger (up to about 30%) when the data center is located farthest from the center of the network topology.
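
    A back-of-the-envelope way to see why traffic locality matters is to charge each unit of traffic for the network distance it travels: requests served by a nearby DCN node cross few hops, while requests sent to a central data center cross many. The toy calculation below uses made-up hop counts and an assumed ideal energy-proportional cost per unit of traffic per hop; it reproduces the qualitative effect described above, not the paper's model or numbers.

```python
# Toy WAN-energy comparison: centralized data center vs. distributed computing network.
TRAFFIC = 1000.0               # total user traffic (arbitrary units)
HOPS_TO_CENTRAL_DC = 8         # assumed average hop count to the central site
HOPS_TO_LOCAL_NODE = 2         # assumed average hop count to a nearby DCN node
ENERGY_PER_UNIT_HOP = 1.0      # ideal energy-proportional router cost

def wan_energy(locality_fraction):
    """WAN energy when `locality_fraction` of the traffic is served by nearby nodes."""
    local = locality_fraction * TRAFFIC * HOPS_TO_LOCAL_NODE
    remote = (1 - locality_fraction) * TRAFFIC * HOPS_TO_CENTRAL_DC
    return (local + remote) * ENERGY_PER_UNIT_HOP

centralized = wan_energy(0.0)              # everything goes to the central data center
with_dcn = wan_energy(0.7)                 # strong locality: 70% served locally
print(with_dcn / centralized)              # ~0.48, i.e. roughly half the WAN energy
```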

  9. CICART Center For Integrated Computation And Analysis Of Reconnection And Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharjee, Amitava [Univ. of New Hampshire, Durham, NH (United States)

    2016-03-27

    CICART is a partnership between the University of New Hampshire (UNH) and Dartmouth College. CICART addresses two important science needs of the DoE: the basic understanding of magnetic reconnection and turbulence that strongly impacts the performance of fusion plasmas, and the development of new mathematical and computational tools that enable the modeling and control of these phenomena. The principal participants of CICART constitute an interdisciplinary group, drawn from the communities of applied mathematics, astrophysics, computational physics, fluid dynamics, and fusion physics. It is a main premise of CICART that fundamental aspects of magnetic reconnection and turbulence in fusion devices, smaller-scale laboratory experiments, and space and astrophysical plasmas can be viewed from a common perspective, and that progress in understanding in any of these interconnected fields is likely to lead to progress in others. The establishment of CICART has strongly impacted the education and research mission of a new Program in Integrated Applied Mathematics in the College of Engineering and Applied Sciences at UNH by enabling the recruitment of a tenure-track faculty member, supported equally by UNH and CICART, and the establishment of an IBM-UNH Computing Alliance. The proposed areas of research in magnetic reconnection and turbulence in astrophysical, space, and laboratory plasmas include the following topics: (A) Reconnection and secondary instabilities in large high-Lundquist-number plasmas, (B) Particle acceleration in the presence of multiple magnetic islands, (C) Gyrokinetic reconnection: comparison with fluid and particle-in-cell models, (D) Imbalanced turbulence, (E) Ion heating, and (F) Turbulence in laboratory (including fusion-relevant) experiments. These theoretical studies make active use of three high-performance computer simulation codes: (1) The Magnetic Reconnection Code, based on extended two-fluid (or Hall MHD) equations, in an Adaptive Mesh

  10. Carbon Dioxide Information Analysis Center and World Data Center for Atmospheric Trace Gases Fiscal Year 1999 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Cushman, R.M.

    2000-03-31

    The Carbon Dioxide Information Analysis Center (CDIAC), which includes the World Data Center (WDC) for Atmospheric Trace Gases, is the primary global-change data and information analysis center of the Department of Energy (DOE). More than just an archive of data sets and publications, CDIAC has--since its inception in 1982--enhanced the value of its holdings through intensive quality assurance, documentation, and integration. Whereas many traditional data centers are discipline-based (for example, meteorology or oceanography), CDIAC's scope includes potentially anything and everything that would be of value to users concerned with the greenhouse effect and global climate change, including concentrations of carbon dioxide (CO{sub 2}) and other radiatively active gases in the atmosphere; the role of the terrestrial biosphere and the oceans in the biogeochemical cycles of greenhouse gases; emissions of CO{sub 2} and other trace gases to the atmosphere; long-term climate trends; the effects of elevated CO{sub 2} on vegetation; and the vulnerability of coastal areas to rising sea level. CDIAC is located within the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. CDIAC is co-located with ESD researchers investigating global-change topics, such as the global carbon cycle and the effects of carbon dioxide on vegetation. CDIAC staff are also connected with current ORNL research on related topics, such as renewable energy and supercomputing technologies. CDIAC is supported by the Environmental Sciences Division (Jerry Elwood, Acting Director) of DOE's Office of Biological and Environmental Research. CDIAC's FY 1999 budget was 2.2M dollars. CDIAC represents the DOE in the multi-agency Global Change Data and Information System. Bobbi Parra, and Wanda Ferrell on an interim basis, is DOE's Program Manager with responsibility for CDIAC. CDIAC comprises three groups, Global Change Data, Computer Systems, and

  11. Blast forecasting guide for the Site 300 Meteorology Center

    International Nuclear Information System (INIS)

    Odell, B.N.; Pfeifer, H.E.; Arganbright, V.E.

    1978-01-01

    These step-by-step procedures enable an occasional operator to run the Site 300 Meteorological Center. The primary function of the Center is to determine the maximum weight of high explosives that can be fired at Site 300 under any given meteorological conditions. A secondary function is to supply weather data for other programs such as ARAC (Atmospheric Release Advisory Capability). Included in the primary function are radar and theodolite operations for balloon tracking; calculation of temperatures for various altitudes using Oakland weather obtained from a teletype; computer terminal operation to obtain wind directions, wind velocities, temperatures, and pressure at various altitudes; and methods to determine high-explosive weight limits for simple inversions and focus conditions using pressure-versus-altitude information obtained from the computer. General information is included such as names, telephone numbers, and addresses of maintenance personnel, additional sources of weather information, chart suppliers, balloons, spare parts, etc

  12. Blast forecasting guide for the Site 300 Meteorology Center

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B.N.; Pfeifer, H.E.; Arganbright, V.E.

    1978-06-01

    These step-by-step procedures enable an occasional operator to run the Site 300 Meteorological Center. The primary function of the Center is to determine the maximum weight of high explosives that can be fired at Site 300 under any given meteorological conditions. A secondary function is to supply weather data for other programs such as ARAC (Atmospheric Release Advisory Capability). Included in the primary function are radar and theodolite operations for balloon tracking; calculation of temperatures for various altitudes using Oakland weather obtained from a teletype; computer terminal operation to obtain wind directions, wind velocities, temperatures, and pressure at various altitudes; and methods to determine high-explosive weight limits for simple inversions and focus conditions using pressure-versus-altitude information obtained from the computer. General information is included such as names, telephone numbers, and addresses of maintenance personnel, additional sources of weather information, chart suppliers, balloons, spare parts, etc.

  13. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Science.gov (United States)

    2013-08-07

    ... 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match No. 12 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice... of 1974, as amended, this notice establishes a CMP that CMS plans to conduct with the Department of...

  14. Climate Prediction Center (CPC) Palmer Drought and Crop Moisture Indices

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Climate Prediction Center (CPC) Palmer Drought Severity and Crop Moisture Indices are computed for the 344 U.S. Climate Divisions on a weekly basis based on a...

  15. [Geometry, analysis, and computation in mathematics and applied science]. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, D.

    1994-02-01

    The principal investigators' work on a variety of pure and applied problems in Differential Geometry, Calculus of Variations and Mathematical Physics has been done in a computational laboratory and has been based on interactive scientific computer graphics and high-speed computation created by the principal investigators to study geometric interface problems in the physical sciences. We have developed software to simulate various physical phenomena from constrained plasma flow to the electron microscope imaging of the microstructure of compound materials, techniques for the visualization of geometric structures that have been used to make significant breakthroughs in the global theory of minimal surfaces, and graphics tools to study evolution processes, such as flow by mean curvature, while simultaneously developing the mathematical foundation of the subject. An increasingly important activity of the laboratory is to extend this environment in order to support and enhance scientific collaboration with researchers at other locations. Toward this end, the Center developed the GANGVideo distributed video software system and software methods for running lab-developed programs simultaneously on remote and local machines. Further, the Center operates a broadcast video network, running in parallel with the Center's data networks, over which researchers can access stored video materials or view ongoing computations. The graphical front-end to GANGVideo can be used to make "multi-media mail" from both "live" computing sessions and stored materials without video editing. Currently, videotape is used as the delivery medium, but GANGVideo is compatible with future "all-digital" distribution systems. Thus, as a byproduct of mathematical research, we are developing methods for scientific communication. But, most important, our research focuses on important scientific problems; the parallel development of computational and graphical tools is driven by scientific needs.

  16. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  17. Green Computing: Need of the Hour

    Science.gov (United States)

    Jena, Rabindra Ku

    Environmental and energy conservation issues have taken center stage in the global business arena in recent years. The reality of rising energy costs and their impact on international affairs, coupled with increased concern over the global warming climate crisis and other environmental issues, has shifted the social and economic consciousness of the business community. This paper discusses the need for green computing and also studies the participation of different stakeholders in the implementation of green computing concepts in India.

  18. Computer models for kinetic equations of magnetically confined plasmas

    International Nuclear Information System (INIS)

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method

  19. Chest X ray effective doses estimation in computed radiography

    International Nuclear Information System (INIS)

    Abdalla, Esra Abdalrhman Dfaalla

    2013-06-01

    Conventional chest radiography is technically difficult because of the wide range of tissue attenuations in the chest and the limitations of screen-film systems. Computed radiography (CR) offers a different approach that utilizes a photostimulable phosphor; such phosphors overcome some of the image quality limitations of chest imaging. The objective of this study was to estimate the effective dose in computed radiography at three hospitals in Khartoum. The study was conducted in the radiography departments of three centres: Advanced Diagnostic Center, Nilain Diagnostic Center, and Modern Diagnostic Center. Entrance surface dose (ESD) measurements were conducted for quality control of the X-ray machines and a survey of operator techniques. The ESDs were measured with an UNFORS dosimeter, and mathematical equations were used to estimate patient doses during chest X rays. A total of 120 patients were examined in the three centres, of whom 62 were males and 58 were females. The overall mean and range of patient dose was 0.073±0.037 (0.014-0.16) mGy per procedure, while the effective dose was 3.4±1.7 (0.6-7.0) mSv per procedure. This study compared radiation doses to patients in radiographic examinations of the chest using computed radiography. The radiation dose was measured in three centres in Khartoum, Sudan. The measured effective doses showed that the dose in chest radiography was lower with computed radiography than in previous studies. (Author)
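
    The abstract does not state which equations were used; one commonly used indirect approach, given here only as a hedged illustration, estimates the ESD from the exposure factors and then converts it to an effective dose with a published conversion coefficient:

    \[
      \mathrm{ESD} = \mathrm{OP}\left(\frac{kV_p}{80}\right)^{2} \times \mathrm{mAs} \times \left(\frac{100}{\mathrm{FSD}}\right)^{2} \times \mathrm{BSF},
      \qquad
      E = \mathrm{ESD} \times C_{\mathrm{chest}},
    \]

    where OP is the tube output (mGy/mAs at 80 kVp and 100 cm), FSD is the focus-to-skin distance in cm, BSF is the backscatter factor, and C_chest is a tabulated ESD-to-effective-dose conversion coefficient for the chest projection.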

  20. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    Science.gov (United States)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  1. Research on Using the Naturally Cold Air and the Snow for Data Center Air-conditioning, and Humidity Control

    Science.gov (United States)

    Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko

    Lowering power consumption has become a worldwide concern. It is also becoming a larger issue for computer systems, as reflected in the growing use of software-as-a-service and cloud computing, whose market has grown since 2000; at the same time, the number of data centers that house and manage these computers has increased rapidly. Power consumption at data centers accounts for a large share of total IT power usage and is still rising rapidly. This research focuses on air-conditioning, which accounts for the largest portion of electric power consumption in data centers, and proposes a technique to lower power consumption by using naturally cold air and snow to control temperature and humidity. We verify the effectiveness of this approach experimentally. Furthermore, we also examine the extent to which energy reduction is possible when a data center is located in Hokkaido.

  2. a Recursive Approach to Compute Normal Forms

    Science.gov (United States)

    HSU, L.; MIN, L. J.; FAVRETTO, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.
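
    A minimal SymPy sketch of the recursive idea (an assumed toy example, not the authors' Maple program): a near-identity change of variables is chosen order by order so that non-resonant terms vanish; here the quadratic term of a scalar model equation is removed.

        import sympy as sp

        y, h, lam, a = sp.symbols('y h lam a')

        x = y + h*y**2                                   # near-identity transformation x = y + h*y^2
        rhs = lam*x + a*x**2                             # original field x' = lam*x + a*x^2, written in y
        ydot = sp.series(rhs / sp.diff(x, y), y, 0, 3).removeO()   # transformed equation y' up to O(y^3)

        quad = ydot.coeff(y, 2)                          # coefficient of y^2 in the new equation
        print(sp.solve(sp.Eq(quad, 0), h))               # -> [a/lam]: h = a/lam removes the term (lam != 0)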

  3. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    Science.gov (United States)

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  4. An Ontology-Based Architecture for Adaptive Work-Centered User Interface Technology

    National Research Council Canada - National Science Library

    Aragones, Amy; Bruno, Jeanette; Crapo, Andrew; Garbiras, Marc

    2005-01-01

    .... The first concept is to use an ontology modeling approach to characterize a work domain in terms of "work-centered" activities as well as the computation mechanisms that achieve an implementation...

  5. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
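
    A minimal sketch of REST-style job submission of the kind the ARC-CE/SCEAPI bridge performs; the base URL, endpoints, payload fields and token handling below are hypothetical placeholders, not the actual SCEAPI interface.

        import requests

        SCEAPI_BASE = "https://sceapi.example.cn/api/v1"        # placeholder URL, not the real front end

        def submit_job(token, executable, input_files):
            """Submit one simulation job and return the remote job id (hypothetical schema)."""
            payload = {"executable": executable, "inputs": input_files, "cores": 24}
            resp = requests.post(f"{SCEAPI_BASE}/jobs", json=payload,
                                 headers={"Authorization": f"Bearer {token}"}, timeout=30)
            resp.raise_for_status()
            return resp.json()["job_id"]

        def poll_status(token, job_id):
            """Return the job state, e.g. queued / running / finished (hypothetical schema)."""
            resp = requests.get(f"{SCEAPI_BASE}/jobs/{job_id}",
                                headers={"Authorization": f"Bearer {token}"}, timeout=30)
            resp.raise_for_status()
            return resp.json()["status"]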

  6. Production Management System for AMS Computing Centres

    Science.gov (United States)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production), as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a deterministic finite automaton [3] model and is implemented in the scripting languages Python and Perl, with the built-in sqlite3 database, on Linux operating systems. Different batch management systems, file-system storage, and transfer protocols are supported. The details of the integration with Open Science Grid are presented as well.
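
    A minimal sketch of a deterministic-finite-automaton job model persisted in the built-in sqlite3 database, in the spirit of the system described above; the state names, events and table schema are assumptions for illustration, not the AMS production code.

        import sqlite3

        # Allowed transitions of the job DFA (hypothetical state and event names).
        TRANSITIONS = {
            ("created", "submit"): "submitted",
            ("submitted", "start"): "running",
            ("running", "finish"): "transferring",
            ("running", "fail"): "failed",
            ("transferring", "close"): "accounted",
        }

        def init_db(path=":memory:"):
            db = sqlite3.connect(path)
            db.execute("CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, state TEXT)")
            return db

        def apply_event(db, job_id, event):
            (state,) = db.execute("SELECT state FROM jobs WHERE id = ?", (job_id,)).fetchone()
            new_state = TRANSITIONS.get((state, event))
            if new_state is None:
                raise ValueError(f"event {event!r} not allowed in state {state!r}")
            db.execute("UPDATE jobs SET state = ? WHERE id = ?", (new_state, job_id))
            return new_state

        db = init_db()
        db.execute("INSERT INTO jobs VALUES (?, ?)", ("mc-run-001", "created"))
        for ev in ("submit", "start", "finish", "close"):
            print(ev, "->", apply_event(db, "mc-run-001", ev))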

  7. Mean centering, multicollinearity, and moderators in multiple regression: The reconciliation redux.

    Science.gov (United States)

    Iacobucci, Dawn; Schneider, Matthew J; Popovich, Deidre L; Bakamitsos, Georgios A

    2017-02-01

    In this article, we attempt to clarify our statements regarding the effects of mean centering. In a multiple regression with predictors A, B, and A × B (where A × B serves as an interaction term), mean centering A and B prior to computing the product term can clarify the regression coefficients (which is good) and the overall model fit R² will remain undisturbed (which is also good).
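
    A short numerical check of the claim above (simulated data with arbitrarily chosen coefficients, assumed only for illustration): centering A and B before forming the product changes the individual coefficients but leaves R² untouched, because the centered design spans the same column space.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        A = rng.normal(5.0, 2.0, n)
        B = rng.normal(3.0, 1.5, n)
        y = 1.0 + 0.5*A - 0.3*B + 0.8*A*B + rng.normal(0.0, 1.0, n)

        def r_squared(predictors, y):
            X = np.column_stack([np.ones(len(y)), *predictors])   # design matrix with intercept
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1.0 - resid.var() / y.var()

        Ac, Bc = A - A.mean(), B - B.mean()                        # mean-centered predictors
        print(r_squared([A, B, A*B], y))                           # raw predictors
        print(r_squared([Ac, Bc, Ac*Bc], y))                       # centered: identical R^2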

  8. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  9. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  10. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  11. Lightness : a function-virtualizable software defined data center network with all-optical circuit/packet switching

    NARCIS (Netherlands)

    Saridis, G.; Peng, S.; Yan, Y.; Aguado, A.; Guo, B.; Arslan, M.; Jackson, C.; Miao, W.; Calabretta, N.; Agraz, F.; Spadaro, S.; Bernini, G.; Ciulli, N.; Zervas, G.; Nejabati, R.; Simeonidou, D.

    2016-01-01

    Modern high-performance Data Centers are responsible for delivering a huge variety of cloud applications to the end-users, which are increasingly pushing the limits of currently deployed computing and network infrastructure. All-optical dynamic data center network (DCN) architectures are strong

  12. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
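
    A minimal model-centered example in the spirit of the paper (a single M/M/1 queue rather than the multiphase systems it proposes): an event-driven stochastic simulation of the mean waiting time, checked against the analytical result.

        import random

        def simulate_mm1(arrival_rate, service_rate, n_customers, seed=1):
            rng = random.Random(seed)
            t_arrival, t_free, total_wait = 0.0, 0.0, 0.0
            for _ in range(n_customers):
                t_arrival += rng.expovariate(arrival_rate)        # Poisson arrivals
                start = max(t_arrival, t_free)                    # wait if the server is busy
                total_wait += start - t_arrival
                t_free = start + rng.expovariate(service_rate)    # exponential service times
            return total_wait / n_customers

        lam, mu = 0.8, 1.0
        print("simulated mean wait:  ", round(simulate_mm1(lam, mu, 200_000), 3))
        print("theoretical mean wait:", round((lam / mu) / (mu - lam), 3))    # rho / (mu - lambda)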

  13. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address, named the vector address (VA), as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) developed for the core network no longer rely on TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the problem of scalability is solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows is significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.
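
    A minimal sketch of source routing with a vector address; the abstract does not give the VA encoding, so the per-hop list of output ports below is an assumption made only to illustrate why no per-switch flow-table lookup is needed.

        from dataclasses import dataclass, field

        @dataclass
        class VectorSwitch:
            name: str
            ports: dict = field(default_factory=dict)    # port id -> next switch or destination host

            def forward(self, vector_address, hop, payload):
                port = vector_address[hop]               # the VA alone defines the path
                nxt = self.ports[port]
                if isinstance(nxt, VectorSwitch):
                    return nxt.forward(vector_address, hop + 1, payload)
                return nxt, payload                      # delivered to a host

        # Hypothetical path: host A -> s1 (port 2) -> s2 (port 1) -> s3 (port 4) -> host B
        s3 = VectorSwitch("s3", {4: "hostB"})
        s2 = VectorSwitch("s2", {1: s3})
        s1 = VectorSwitch("s1", {2: s2})
        print(s1.forward([2, 1, 4], 0, "hello"))          # ('hostB', 'hello')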

  14. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address, named the vector address (VA), as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) developed for the core network no longer rely on TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the problem of scalability is solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows is significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925

  15. Institute for Computational Mechanics in Propulsion (ICOMP). 10

    Science.gov (United States)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    1996-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1995.

  16. BaBar computing - From collisions to physics results

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The BaBar experiment at SLAC studies B-physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as their evolution with the ever higher luminosity being delivered by PEP-II. This will include data acquisition and software triggering in a high availability, low-deadtime online environment, a prompt, automated calibration pass through the data SLAC and then the full reconstruction of the data that takes place at INFN-Padova within 24 hours. Monte Carlo production takes place in a highly automated fashion in 25+ sites. The resulting real and simulated data is distributed and made available at SLAC and other computing centers. For analysis a much more sophisticated skimming pass has been introduced in the past year, ...

  17. Montessori Transformation at Computer Associates.

    Science.gov (United States)

    Mars, Lisa

    2002-01-01

    Describes the growth of the all-day Montessori program for children ages 6 weeks to 6 years at Computer Associates' corporate headquarters and multiple sites worldwide. Focuses on placement of AMI Montessori-trained teachers, refurbishing of the child development centers to fit Montessori specifications, and the Nido--the children's community--and…

  18. Center for modeling of turbulence and transition: Research briefs, 1995

    Science.gov (United States)

    1995-10-01

    This research brief contains the progress reports of the research staff of the Center for Modeling of Turbulence and Transition (CMOTT) from July 1993 to July 1995. It also constitutes a progress report to the Institute of Computational Mechanics in Propulsion located at the Ohio Aerospace Institute and the Lewis Research Center. CMOTT has been in existence for about four years. In the first three years, its main activities were to develop and validate turbulence and combustion models for propulsion systems, in an effort to remove the deficiencies of existing models. Three workshops on computational turbulence modeling were held at LeRC (1991, 1993, 1994). At present, CMOTT is integrating the CMOTT developed/improved models into CFD tools which can be used by the propulsion systems community. This activity has resulted in an increased collaboration with the Lewis CFD researchers.

  19. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  20. Generalized bibliographic format as used by the Ecological Sciences Information Center

    International Nuclear Information System (INIS)

    Allison, L.J.; Pfuderer, H.A.; Collier, B.N.

    1979-03-01

    The purpose of this document is to provide guidance for the preparation of computer input for the information programs being developed by the Ecological Sciences Information Center (ESIC)/Information Center Complex (ICC) of the Oak Ridge National Laboratory (ORNL). Through the use of a generalized system, the data of all the centers of ICC are compatible. Literature included in an information data base has a number of identifying characteristics. Each of these characteristics or data fields can be recognized and searched by the computer. The information for each field must have an alphanumeric label or field descriptor. All of the labels presently used are sets of upper-case letters approximating the name of the field they represent. Presently, there are 69 identified fields; additional fields may be included in the future. The format defined here is designed to facilitate the input of information to the ADSEP program. This program processes data for the ORNL on-line (ORLOOK) search system and is a special case of the ADSEP text input option

  1. Generalized bibliographic format as used by the Ecological Sciences Information Center

    Energy Technology Data Exchange (ETDEWEB)

    Allison, L.J.; Pfuderer, H.A.; Collier, B.N.

    1979-03-01

    The purpose of this document is to provide guidance for the preparation of computer input for the information programs being developed by the Ecological Sciences Information Center (ESIC)/Information Center Complex (ICC) of the Oak Ridge National Laboratory (ORNL). Through the use of a generalized system, the data of all the centers of ICC are compatible. Literature included in an information data base has a number of identifying characteristics. Each of these characteristics or data fields can be recognized and searched by the computer. The information for each field must have an alphanumeric label or field descriptor. All of the labels presently used are sets of upper-case letters approximating the name of the field they represent. Presently, there are 69 identified fields; additional fields may be included in the future. The format defined here is designed to facilitate the input of information to the ADSEP program. This program processes data for the ORNL on-line (ORLOOK) search system and is a special case of the ADSEP text input option.
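
    A hypothetical record fragment illustrating the upper-case field-label convention described above; the labels shown are invented for illustration only, since the abstract does not list the actual 69 ESIC field descriptors.

        TITL  Effects of thermal discharge on benthic communities
        AUTH  Smith, J.R.; Jones, A.B.
        DATE  1978
        KEYW  thermal effluent; benthos; aquatic ecology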

  2. The CMS experiment workflows on StoRM based storage at Tier-1 and Tier-2 centers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Bartolome, I Cabrillo; Matorras, F; Gonzalez Caballero, I; Sartirana, A

    2010-01-01

    Approaching LHC data taking, the CMS experiment is deploying, commissioning and operating the building tools of its grid-based computing infrastructure. The commissioning program includes testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Recently, some of the Tier-1 and Tier-2 centers supporting the collaboration have started to deploy StoRM based storage systems. These are POSIX-based disk storage systems on top of which StoRM implements the Storage Resource Manager (SRM) version 2 interface, allowing for standards-based access from the Grid. In these notes we briefly describe the experience achieved so far at the CNAF Tier-1 center and at the IFCA Tier-2 center.

  3. Parallel neural pathways in higher visual centers of the Drosophila brain that mediate wavelength-specific behavior

    Directory of Open Access Journals (Sweden)

    Hideo eOtsuna

    2014-02-01

    Full Text Available Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior.

  4. Comparison of canal transportation and centering ability of rotary protaper, one shape system and wave one system using cone beam computed tomography: An in vitro study

    Science.gov (United States)

    Tambe, Varsha Harshal; Nagmode, Pradnya Sunil; Abraham, Sathish; Patait, Mahendra; Lahoti, Pratik Vinod; Jaju, Neha

    2014-01-01

    Aim: The aim of the present study was to compare the canal transportation and centering ability of Rotary ProTaper, One Shape and Wave One systems using cone beam computed tomography (CBCT) in curved root canals, in order to find the better instrumentation technique for maintaining root canal geometry. Materials and Methods: A total of 30 freshly extracted premolars having curved root canals with at least 10 degrees of curvature were divided into three groups of 10 teeth each. All teeth were scanned by CBCT to determine the root canal shape before instrumentation. In Group 1, the canals were prepared with Rotary ProTaper files, in Group 2 the canals were prepared with One Shape files and in Group 3 the canals were prepared with Wave One files. After preparation, a post-instrumentation scan was performed. Pre-instrumentation and post-instrumentation images were obtained at three levels (3 mm apical, 3 mm coronal and 8 mm above the apical foramen) and were compared using CBCT software. The amount of transportation and the centering ability were assessed. The three groups were statistically compared with analysis of variance and the Tukey honestly significant difference test. Results: All instruments maintained the original canal curvature, with significant differences between the different files. The data suggested that Wave One files presented the best outcomes for both variables evaluated. Wave One files caused less transportation and remained better centered in the canal than One Shape and Rotary ProTaper files. Conclusion: Canal preparation with Wave One files showed less transportation and better centering ability than One Shape and ProTaper. PMID:25506145
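
    The abstract does not state which formulas were applied; measures commonly used in CBCT transportation studies, given here only as a hedged illustration, are

    \[
      \text{Transportation} = \lvert (a_1 - a_2) - (b_1 - b_2) \rvert,
      \qquad
      \text{Centering ratio} = \frac{\min(a_1 - a_2,\; b_1 - b_2)}{\max(a_1 - a_2,\; b_1 - b_2)},
    \]

    where a and b are the distances from the canal wall to the mesial and distal root surfaces before (subscript 1) and after (subscript 2) instrumentation; a ratio of 1 indicates a perfectly centered preparation.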

  5. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with application in several fields of engineering, like automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, on a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real problems applications based upon their strong background on modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  6. NOAA/West coast and Alaska Tsunami warning center Atlantic Ocean response criteria

    Science.gov (United States)

    Whitmore, P.; Refidaff, C.; Caropolo, M.; Huerfano-Moreno, V.; Knight, W.; Sammler, W.; Sandrik, A.

    2009-01-01

    West Coast/Alaska Tsunami Warning Center (WCATWC) response criteria for earthquakes occurring in the Atlantic and Caribbean basins are presented. Initial warning center decisions are based on an earthquake's location, magnitude, depth, distance from coastal locations, and precomputed threat estimates based on tsunami models computed from similar events. The new criteria will help limit the geographical extent of warnings and advisories to threatened regions, and complement the new operational tsunami product suite. Criteria are set for tsunamis generated by earthquakes, which are by far the main cause of tsunami generation (either directly through sea floor displacement or indirectly by triggering of sub-sea landslides). The new criteria require development of a threat data base which sets warning or advisory zones based on location, magnitude, and pre-computed tsunami models. The models determine coastal tsunami amplitudes based on likely tsunami source parameters for a given event. Based on the computed amplitude, warning and advisory zones are pre-set.

  7. Virtual Meteorological Center

    Directory of Open Access Journals (Sweden)

    Marius Brinzila

    2007-10-01

    Full Text Available A computer-based virtual meteorological center with the possibility of transmitting information over the Internet is presented. Environmental data are collected with a logging field meteorological station. The station collects and automatically saves data on air temperature, relative humidity, pressure, wind speed and wind direction, rainfall, solar radiation and air quality. It can also perform sensor tests, analyze historical data and evaluate statistical information. The novelty of the system is that it can publish data over the Internet using LabVIEW Web Server capabilities and deliver a video signal to the School TV network. The system also performs redundant measurements of temperature and humidity and was improved using new sensors and an original signal-conditioning module.

  8. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  9. Clinical utility of dental cone-beam computed tomography: current perspectives

    Directory of Open Access Journals (Sweden)

    Jaju PP

    2014-04-01

    Full Text Available Prashant P Jaju,1 Sushma P Jaju2 1Oral Medicine and Radiology, 2Conservative Dentistry and Endodontics, Rishiraj College of Dental Sciences and Research Center, Bhopal, India. Abstract: Panoramic radiography and computed tomography were the pillars of maxillofacial diagnosis. With the advent of cone-beam computed tomography, dental practice has seen a paradigm shift. This review article highlights the potential applications of cone-beam computed tomography in the fields of dental implantology and forensic dentistry, and its limitations in maxillofacial diagnosis. Keywords: dental implants, cone-beam computed tomography, panoramic radiography, computed tomography

  10. Transformation of topologically close-packed β-W to body-centered cubic α-W: Comparison of experiments and computations.

    Science.gov (United States)

    Barmak, Katayun; Liu, Jiaxing; Harlan, Liam; Xiao, Penghao; Duncan, Juliana; Henkelman, Graeme

    2017-10-21

    The enthalpy and activation energy for the transformation of the metastable form of tungsten, β-W, which has the topologically close-packed A15 structure (space group Pm-3n), to equilibrium α-W, which is body-centered cubic (A2, space group Im-3m), were measured using differential scanning calorimetry. The β-W films were 1 μm thick and were prepared by sputter deposition in argon with a small amount of nitrogen. The transformation enthalpy was measured as -8.3 ± 0.4 kJ/mol (-86 ± 4 meV/atom) and the transformation activation energy as 2.2 ± 0.1 eV. The measured enthalpy was found to agree well with the difference in energies of α and β tungsten computed using density functional theory, which gave a value of -82 meV/atom for the transformation enthalpy. A calculated concerted transformation mechanism with a barrier of 0.4 eV/atom, in which all the atoms in an A15 unit cell transform into A2, was found to be inconsistent with the experimentally measured activation energy for any critical nucleus larger than two A2 unit cells. Larger calculations of eight A15 unit cells spontaneously relax to a mechanism in which part of the supercell first transforms from A15 to A2, creating a phase boundary, before the remaining A15 transforms into the A2 phase. Both calculations indicate that a nucleation and growth mechanism is favored over a concerted transformation. More consistent with the experimental activation energy was that of a calculated local transformation mechanism at the A15-A2 phase boundary, computed as 1.7 eV using molecular dynamics simulations. This calculated phase transformation mechanism involves collective rearrangements of W atoms in the disordered interface separating the A15 and A2 phases.
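
    A quick consistency check of the two quoted units for the transformation enthalpy (per mole versus per atom):

    \[
      \frac{8.3 \times 10^{3}\ \mathrm{J\,mol^{-1}}}{(6.022 \times 10^{23}\ \mathrm{mol^{-1}})(1.602 \times 10^{-19}\ \mathrm{J\,eV^{-1}})}
      \approx 0.086\ \mathrm{eV\ per\ atom} = 86\ \mathrm{meV\ per\ atom},
    \]

    in agreement with the -8.3 kJ/mol and -86 meV/atom values reported above.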

  11. Jackson State University's Center for Spatial Data Research and Applications: New facilities and new paradigms

    Science.gov (United States)

    Davis, Bruce E.; Elliot, Gregory

    1989-01-01

    Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but is one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a Vax mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental data base, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.

  12. Carbon Dioxide Information Analysis Center and World Data Center for Atmospheric Trace Gases Fiscal Year 2001 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Cushman, R.M.

    2002-10-15

    The Carbon Dioxide Information Analysis Center (CDIAC), which includes the World Data Center (WDC) for Atmospheric Trace Gases, is the primary global change data and information analysis center of the U.S. Department of Energy (DOE). More than just an archive of data sets and publications, CDIAC has, since its inception in 1982, enhanced the value of its holdings through intensive quality assurance, documentation, and integration. Whereas many traditional data centers are discipline-based (for example, meteorology or oceanography), CDIAC's scope includes potentially anything and everything that would be of value to users concerned with the greenhouse effect and global climate change, including concentrations of carbon dioxide (CO{sub 2}) and other radiatively active gases in the atmosphere; the role of the terrestrial biosphere and the oceans in the biogeochemical cycles of greenhouse gases; emissions of CO{sub 2} and other trace gases to the atmosphere; long-term climate trends; the effects of elevated CO{sub 2} on vegetation; and the vulnerability of coastal areas to rising sea levels. CDIAC is located within the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. CDIAC is co-located with ESD researchers investigating global-change topics, such as the global carbon cycle and the effects of carbon dioxide on climate and vegetation. CDIAC staff are also connected with current ORNL research on related topics, such as renewable energy and supercomputing technologies. CDIAC is supported by the Environmental Sciences Division (Jerry Elwood, Director) of DOE's Office of Biological and Environmental Research. CDIAC represents DOE in the multi-agency Global Change Data and Information System (GCDIS). Wanda Ferrell is DOE's Program Manager with overall responsibility for CDIAC. Roger Dahlman is responsible for CDIAC's AmeriFlux tasks, and Anna Palmisano for CDIAC's Ocean Data tasks. CDIAC is made

  13. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

    When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT server from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. The six combinations of flooded and locally ducted air distribution make up the vast majority of all installations, except fully ducted air distribution methods. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied with a high heat density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center is measured to validate a model for predicting the effect of different air distribution systems. A method for planning and design of the appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)

  14. A Variable Service Broker Routing Policy for data center selection in cloud analyst

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2017-07-01

    Full Text Available Cloud computing depends on sharing distributed computing resources to handle different services such as servers, storage and applications. The applications and infrastructure are provided as pay-per-use services through data centers to the end user. The data centers are located at different geographic locations. However, these data centers can become overloaded with the increasing number of client applications being serviced at the same time and location, which degrades the overall QoS of the distributed services. Since different user applications may have different configurations and requirements, measuring the performance of user applications on the various resources is challenging, and the service provider cannot easily decide on the right level of resources. Therefore, we propose a Variable Service Broker Routing Policy (VSBRP), a heuristic-based technique that aims to achieve minimum response time by considering the communication channel bandwidth, latency and the size of the job. The proposed service broker policy also reduces the overloading of the data centers by redirecting user requests to the next data center that yields better response and processing time. The simulation shows promising results in terms of response and processing time compared to other known broker policies from the literature.
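
    A minimal sketch of the broker heuristic described above (hypothetical numbers and a simple queue-based processing estimate; not the actual implementation): the data center with the lowest estimated response time, combining latency, transfer time and queueing, is selected.

        from dataclasses import dataclass

        @dataclass
        class DataCenter:
            name: str
            latency_ms: float        # round-trip latency from the user region
            bandwidth_mbps: float    # available channel bandwidth
            pending_jobs: int        # current queue length
            service_ms: float        # mean per-job service time

            def estimated_response_ms(self, job_size_mb):
                transfer_ms = job_size_mb * 8.0 / self.bandwidth_mbps * 1000.0
                queue_ms = self.pending_jobs * self.service_ms
                return self.latency_ms + transfer_ms + queue_ms + self.service_ms

        def select_data_center(centers, job_size_mb):
            return min(centers, key=lambda dc: dc.estimated_response_ms(job_size_mb))

        centers = [
            DataCenter("DC-1", latency_ms=40, bandwidth_mbps=100, pending_jobs=12, service_ms=20),
            DataCenter("DC-2", latency_ms=90, bandwidth_mbps=1000, pending_jobs=2, service_ms=20),
        ]
        print(select_data_center(centers, job_size_mb=50).name)    # picks the center with the lower estimate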

  15. Examining the Computer Self-Efficacy Perceptions of Gifted Students

    Science.gov (United States)

    Kaplan, Abdullah; Öztürk, Mesut; Doruk, Muhammet; Yilmaz, Alper

    2013-01-01

    This study was conducted in order to determine the computer self-efficacy perceptions of gifted students. The research group of this study is composed of gifted students (N = 36) who were studying at the Science and Arts Center in Gümüshane province in the spring semester of the 2012-2013 academic year. The "Computer Self-Efficacy Perception…

  16. Coordinating Center: Molecular and Cellular Findings of Screen-Detected Lesions | Division of Cancer Prevention

    Science.gov (United States)

    The Molecular and Cellular Characterization of Screen‐Detected Lesions ‐ Coordinating Center and Data Management Group will provide support for the participating studies responding to RFA CA14‐10. The coordinating center supports three main domains: network coordination, statistical support and computational analysis and protocol development and database support. Support for

  17. MCNP(TM) Release 6.1.1 beta: Creating and Testing the Code Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Cox, Lawrence J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Casswell, Laura [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-06-12

    This report documents the preparations for and testing of the production release of MCNP6™ 1.1 beta through RSICC at ORNL. It addresses tests on supported operating systems (Linux, MacOSX, Windows) with the supported compilers (Intel, Portland Group and gfortran). Verification and Validation test results are documented elsewhere. This report does not address in detail the overall packaging of the distribution. Specifically, it does not address the nuclear and atomic data collection, the other included software packages (MCNP5, MCNPX and MCNP6) and the collection of reference documents.

  18. Reference Architecture for Multi-Layer Software Defined Optical Data Center Networks

    Directory of Open Access Journals (Sweden)

    Casimer DeCusatis

    2015-09-01

    Full Text Available As cloud computing data centers grow larger and networking devices proliferate, many complex issues arise in the network management architecture. We propose a framework for multi-layer, multi-vendor optical network management using open standards-based software defined networking (SDN). Experimental results are demonstrated in a test bed consisting of three data centers interconnected by a 125 km metropolitan area network, running OpenStack with KVM and VMware components. Use cases include inter-data center connectivity via a packet-optical metropolitan area network, intra-data center connectivity using an optical mesh network, and SDN coordination of networking equipment within and between multiple data centers. We create and demonstrate original software to implement virtual network slicing and affinity policy-as-a-service offerings. Enhancements to synchronous storage backup, cloud exchanges, and Fibre Channel over Ethernet topologies are also discussed.

  19. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale needed for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  20. Projected Applications of a ``Climate in a Box'' Computing System at the NASA Short-term Prediction Research and Transition (SPoRT) Center

    Science.gov (United States)

    Jedlovec, G.; Molthan, A.; Zavodsky, B.; Case, J.; Lafontaine, F.

    2010-12-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to “Climate in a Box” systems, with hardware configurations capable of producing high resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the “Climate in a Box” system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA’s Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the “Climate in a Box” system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed

  1. Projected Applications of a "Climate in a Box" Computing System at the NASA Short-Term Prediction Research and Transition (SPoRT) Center

    Science.gov (United States)

    Jedlovec, Gary J.; Molthan, Andrew L.; Zavodsky, Bradley; Case, Jonathan L.; LaFontaine, Frank J.

    2010-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to "Climate in a Box" systems, with hardware configurations capable of producing high-resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the "Climate in a Box" system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the "Climate in a Box" system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed within the NASA SPo

  2. Ambient radiation levels in positron emission tomography/computed tomography (PET/CT) imaging center

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Priscila do Carmo; Oliveira, Paulo Marcio Campos de; Mamede, Marcelo; Silveira, Mariana de Castro; Aguiar, Polyanna; Real, Raphaela Vila, E-mail: pridili@gmail.com [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil); Silva, Teogenes Augusto da [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2015-01-15

    Objective: to evaluate the level of ambient radiation in a PET/CT center. Materials and methods: previously selected and calibrated TLD-100H thermoluminescent dosimeters were utilized to measure room radiation levels. During 32 days, the detectors were placed at several strategically selected points inside the PET/CT center and in adjacent buildings. After the exposure period, the dosimeters were collected and processed to determine the radiation level. Results: at none of the points selected for measurement did the values exceed the radiation dose threshold for a controlled area (5 mSv/year) or a free area (0.5 mSv/year) as recommended by the Brazilian regulations. Conclusion: in the present study the authors demonstrated that the whole shielding system is appropriate and, consequently, the workers are exposed to doses below the threshold established by Brazilian standards, provided the radiation protection standards are followed. (author)

  3. Computing for Lattice QCD: new developments from the APE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R [INFN, Sezione di Roma Tor Vergata, Roma (Italy); Biagioni, A; De Luca, S [INFN, Sezione di Roma, Roma (Italy)

    2008-06-15

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  4. Computing for Lattice QCD: new developments from the APE experiment

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; De Luca, S.

    2008-01-01

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  5. Teaching introductory computer security at a Department of Defense university

    OpenAIRE

    Irvine, Cynthia E.

    1997-01-01

    The Naval Postgraduate School Center for Information Systems Security (INFOSEC) Studies and Research (NPS CISR) has developed an instructional program in computer security. Its objective is to ensure that students not only understand practical aspects of computer security associated with current technology, but also learn the fundamental principles that can be applied to the development of systems for which high confidence in policy enforcement can be achieved. Introduction to Computer Sec...

  6. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    Science.gov (United States)

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  7. Software-defined optical network for metro-scale geographically distributed data centers.

    Science.gov (United States)

    Samadi, Payman; Wen, Ke; Xu, Junjie; Bergman, Keren

    2016-05-30

    The emergence of cloud computing and big data has rapidly increased the deployment of small and mid-sized data centers. Enterprises and cloud providers require an agile network among these data centers to support application reliability and flexible scalability. We present a software-defined inter-data-center network that enables on-demand scale-out of data centers on a metro-scale optical network. The architecture consists of a combined space/wavelength switching platform and a Software-Defined Networking (SDN) control plane equipped with a wavelength and routing assignment module. It enables transparent, bandwidth-selective connections to be established from L2/L3 switches on demand. The architecture is evaluated in a testbed consisting of 3 data centers, 5-25 km apart. We successfully demonstrated end-to-end bulk data transfer and Virtual Machine (VM) migrations across data centers with less than 100 ms connection setup time and close to full link capacity utilization.
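
    The abstract mentions a wavelength and routing assignment module in the SDN control plane but does not give its algorithm; a minimal first-fit wavelength assignment over a precomputed path, with all names and the channel count assumed for illustration, might look like the following.

```python
# Hypothetical first-fit wavelength assignment; not the controller described in the paper.
from typing import Optional

NUM_WAVELENGTHS = 40          # assumed channel count per fiber

class Link:
    def __init__(self) -> None:
        self.in_use = [False] * NUM_WAVELENGTHS

def assign_wavelength(path: list) -> Optional[int]:
    """Return the lowest-index wavelength free on every link of the path (first fit),
    or None if the request must be blocked (no wavelength conversion assumed)."""
    for w in range(NUM_WAVELENGTHS):
        if all(not link.in_use[w] for link in path):
            for link in path:
                link.in_use[w] = True
            return w
    return None

# Example: a 3-hop path between two metro data centers.
path = [Link(), Link(), Link()]
print(assign_wavelength(path))   # -> 0 for the first request on an idle network
```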

  8. Bringing Computational Thinking into the High School Science and Math Classroom

    Science.gov (United States)

    Trouille, Laura; Beheshti, E.; Horn, M.; Jona, K.; Kalogera, V.; Weintrop, D.; Wilensky, U.; CT-STEM Project, Northwestern University; Center for Talent Development, Northwestern University

    2013-01-01

    Computational thinking (for example, the thought processes involved in developing algorithmic solutions to problems that can then be automated for computation) has revolutionized the way we do science. The Next Generation Science Standards require that teachers support their students’ development of computational thinking and computational modeling skills. As a result, there is a very high demand among teachers for quality materials. Astronomy provides an abundance of opportunities to support student development of computational thinking skills. Our group has taken advantage of this to create a series of astronomy-based computational thinking lesson plans for use in typical physics, astronomy, and math high school classrooms. This project is funded by the NSF Computing Education for the 21st Century grant and is jointly led by Northwestern University’s Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), the Computer Science department, the Learning Sciences department, and the Office of STEM Education Partnerships (OSEP). I will also briefly present the online ‘Astro Adventures’ courses for middle and high school students I have developed through NU’s Center for Talent Development. The online courses take advantage of many of the amazing online astronomy enrichment materials available to the public, including a range of hands-on activities and the ability to take images with the Global Telescope Network. The course culminates with an independent computational research project.

  9. Center for Space Transportation and Applied Research Fifth Annual Technical Symposium Proceedings

    Science.gov (United States)

    1993-01-01

    This Fifth Annual Technical Symposium, sponsored by the UT-Calspan Center for Space Transportation and Applied Research (CSTAR), is organized to provide an overview of the technical accomplishments of the Center's five Research and Technology focus areas during the past year. These areas include chemical propulsion, electric propulsion, commercial space transportation, computational methods, and laser materials processing. Papers in the area of artificial intelligence/expert systems are also presented.

  10. ASTEC: Controls analysis for personal computers

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  11. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of the distributed computing infrastructure are single computing centers, such as the Worldwide LHC Computing Grid Tier 2 center GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal has been achieved by means of analysis of the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and machine learning algorithms such as the linear Support Vector Machine (SVM). The main object of the research was the digital service, since availability, reliability and serviceability of the computing platform can be measured according to the const...
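
    The thesis itself is not reproduced here; as a sketch of the kind of classification a linear SVM could perform on monitoring metrics, with feature names and labels invented purely for illustration:

```python
# Hypothetical sketch: classify site health from monitoring metrics with a linear SVM.
# The features and labels are invented; they are not GoeGrid's actual monitoring data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Columns: [cpu_load, job_failure_rate, storage_latency_ms]; label 1 = degraded service.
X = np.array([[0.40, 0.01,  12.0],
              [0.90, 0.20,  85.0],
              [0.50, 0.02,  15.0],
              [0.95, 0.30, 120.0]])
y = np.array([0, 1, 0, 1])

model = make_pipeline(StandardScaler(), LinearSVC())
model.fit(X, y)
print(model.predict([[0.85, 0.25, 90.0]]))   # -> [1], i.e. flagged as degraded
```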

  12. Quantum computing with defects

    Science.gov (United States)

    Varley, Joel

    2011-03-01

    The development of a quantum computer is contingent upon the identification and design of systems for use as qubits, the basic units of quantum information. One of the most promising candidates consists of a defect in diamond known as the nitrogen-vacancy (NV^-1) center, since it is an individually-addressable quantum system that can be initialized, manipulated, and measured with high fidelity at room temperature. While the success of the NV^-1 stems from its nature as a localized ``deep-center'' point defect, no systematic effort has been made to identify other defects that might behave in a similar way. We provide guidelines for identifying other defect centers with similar properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate systems. To elucidate these points, we compare electronic structure calculations of the NV^-1 center in diamond with those of several deep centers in 4H silicon carbide (SiC). Using hybrid functionals, we report formation energies, configuration-coordinate diagrams, and defect-level diagrams to compare and contrast the properties of these defects. We find that the N_C V_Si^-1 center in SiC, a structural analog of the NV^-1 center in diamond, may be a suitable center with very different optical transition energies. We also discuss how the proposed criteria can be translated into guidelines to discover NV analogs in other tetrahedrally coordinated materials. This work was performed in collaboration with J. R. Weber, W. F. Koehl, B. B. Buckley, A. Janotti, C. G. Van de Walle, and D. D. Awschalom. This work was supported by ARO, AFOSR, and NSF.
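
    The abstract does not spell out the expression behind the reported formation energies; in hybrid-functional defect studies of this kind, a charged-defect formation energy is conventionally computed from supercell total energies as (whether this exact form was used here is an assumption):

```latex
E^{f}[X^{q}] = E_{\mathrm{tot}}[X^{q}] - E_{\mathrm{tot}}[\mathrm{bulk}]
             - \sum_{i} n_{i}\,\mu_{i} + q\,\left(E_{F} + E_{\mathrm{VBM}}\right) + E_{\mathrm{corr}},
```

    where n_i atoms with chemical potential mu_i are added (n_i > 0) or removed (n_i < 0), E_F is the Fermi level referenced to the valence-band maximum E_VBM, and E_corr is a finite-size correction for charged supercells.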

  13. Best Practice Guidelines for Computer Technology in the Montessori Early Childhood Classroom.

    Science.gov (United States)

    Montminy, Peter

    1999-01-01

    Presents a draft for a principle-centered position statement of a Montessori early childhood program in central Pennsylvania, on the pros and cons of computer use in a Montessori 3-6 classroom. Includes computer software rating form. (Author/KB)

  14. A qualitative study adopting a user-centered approach to design and validate a brain computer interface for cognitive rehabilitation for people with brain injury.

    Science.gov (United States)

    Martin, Suzanne; Armstrong, Elaine; Thomson, Eileen; Vargiu, Eloisa; Solà, Marc; Dauwalder, Stefan; Miralles, Felip; Daly Lynn, Jean

    2017-07-14

    Cognitive rehabilitation is established as a core intervention within rehabilitation programs following a traumatic brain injury (TBI). Digitally enabled assistive technologies offer opportunities for clinicians to increase remote access to rehabilitation, supporting the transition into the home. Brain Computer Interface (BCI) systems can harness the residual abilities of individuals with limited function to gain control over computers through their brain waves. This paper presents an online cognitive rehabilitation application developed with therapists to work remotely with people who have TBI and who will use BCI at home to engage in the therapy. A qualitative research study was completed with people who are community dwellers post brain injury (end users) and a cohort of therapists involved in cognitive rehabilitation. A user-centered approach over three phases in the development, design and feasibility testing of this cognitive rehabilitation application included two tasks (Find-a-Category and a Memory Card task). The therapist could remotely prescribe activity with different levels of difficulty. The service user had a home interface which would present the therapy activities. This novel work was achieved by an international consortium of academics, business partners and service users.

  15. Ambient radiation levels in positron emission tomography/computed tomography (PET/CT) imaging center

    Science.gov (United States)

    Santana, Priscila do Carmo; de Oliveira, Paulo Marcio Campos; Mamede, Marcelo; Silveira, Mariana de Castro; Aguiar, Polyanna; Real, Raphaela Vila; da Silva, Teógenes Augusto

    2015-01-01

    Objective To evaluate the level of ambient radiation in a PET/CT center. Materials and Methods Previously selected and calibrated TLD-100H thermoluminescent dosimeters were utilized to measure room radiation levels. During 32 days, the detectors were placed at several strategically selected points inside the PET/CT center and in adjacent buildings. After the exposure period the dosimeters were collected and processed to determine the radiation level. Results At none of the points selected for measurement did the values exceed the radiation dose threshold for a controlled area (5 mSv/year) or a free area (0.5 mSv/year) as recommended by the Brazilian regulations. Conclusion In the present study the authors demonstrated that the whole shielding system is appropriate and, consequently, the workers are exposed to doses below the threshold established by Brazilian standards, provided the radiation protection standards are followed. PMID:25798004

  16. Scientific and technical information output of the Langley Research Center

    Science.gov (United States)

    1984-01-01

    Scientific and technical information that the Langley Research Center produced during the calendar year 1983 is compiled. Included are citations for Formal Reports, Quick-Release Technical Memorandums, Contractor Reports, Journal Articles and other Publications, Meeting Presentations, Technical Talks, Computer Programs, Tech Briefs, and Patents.

  17. Cloud computing and digital media fundamentals, techniques, and applications

    CERN Document Server

    Li, Kuan-Ching; Shih, Timothy K

    2014-01-01

    Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more. The book covers resource optimization for multimedia clo

  18. The Student/Library Computer Science Collaborative

    Science.gov (United States)

    Hahn, Jim

    2015-01-01

    With funding from an Institute of Museum and Library Services demonstration grant, librarians of the Undergraduate Library at the University of Illinois at Urbana-Champaign partnered with students in computer science courses to design and build student-centered mobile apps. The grant work called for demonstration of student collaboration…

  19. "Hack" Is Not A Dirty Word--The Tenth Anniversary of Patron Access Microcomputer Centers in Libraries.

    Science.gov (United States)

    Dewey, Patrick R.

    1986-01-01

    The history of patron access microcomputers in libraries is described as carrying on a tradition that information and computer power should be shared. Questions that all types of libraries need to ask in planning microcomputer centers are considered and several model centers are described. (EM)

  20. Applications of Computer Technology in Complex Craniofacial Reconstruction

    Directory of Open Access Journals (Sweden)

    Kristopher M. Day, MD

    2018-03-01

    Conclusion: Modern 3D technology allows the surgeon to better analyze complex craniofacial deformities, precisely plan surgical correction with computer simulation of results, customize osteotomies, plan distractions, and print 3DPCI, as needed. Advanced 3D computer technology can be applied safely and can potentially improve aesthetic and functional outcomes after complex craniofacial reconstruction. These techniques warrant further study and may be reproducible in various centers of care.

  1. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    Science.gov (United States)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  2. Aspects of computer control from the human engineering standpoint

    International Nuclear Information System (INIS)

    Huang, T.V.

    1979-03-01

    A Computer Control System includes data acquisition, information display and output control signals. In order to design such a system effectively we must first determine the required operational mode: automatic control (closed loop), computer-assisted (open loop), or hybrid control. The choice of operating mode will depend on the nature of the plant, the complexity of the operation, the funds available, and the technical expertise of the operating staff, among many other factors. Once the mode has been selected, consideration must be given to the method (man/machine interface) by which the operator interacts with this system. The human engineering factors are of prime importance to achieving high operating efficiency, and very careful attention must be given to this aspect of the work if full operator acceptance is to be achieved. This paper will discuss these topics and will draw on experience gained in setting up the computer control system in the Main Control Center for Stanford University's Accelerator Center (a high-energy physics research facility)

  3. Modeling Road Traffic Using Service Center

    Directory of Open Access Journals (Sweden)

    HARAGOS, I.-M.

    2012-05-01

    Full Text Available Transport systems have an essential role in modern society because they facilitate access to natural resources and they stimulate trade. Current studies aim at improving transport networks by developing new optimization methods. Because of the increase in the global number of cars, one of the most common problems facing transport networks is congestion. By creating traffic models and simulating them, we can avoid this problem and find appropriate solutions. In this paper we propose a new method for modeling traffic. This method treats road intersections as service centers. A service center represents a set consisting of a queue followed by one or more servers. This model was used to simulate real situations in an urban traffic area. Based on this simulation, we determined the optimal operating configuration and computed the performance measures.
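
    The paper models an intersection as a queue followed by one or more servers; a minimal simulation of such a service center, with arrival and service rates chosen purely for illustration rather than taken from the paper, is sketched below.

```python
# Toy M/M/c service center (Poisson arrivals, exponential service, c servers, FIFO),
# standing in for one road intersection; all rates are illustrative assumptions.
import heapq
import random

def service_center_mean_wait(lam=0.8, mu=0.5, c=2, customers=100_000, seed=1):
    """Return the mean time a 'vehicle' waits in the queue before service starts."""
    rng = random.Random(seed)
    free_at = [0.0] * c              # times at which each server becomes free
    heapq.heapify(free_at)
    arrival, total_wait = 0.0, 0.0
    for _ in range(customers):
        arrival += rng.expovariate(lam)           # next Poisson arrival
        earliest = heapq.heappop(free_at)         # first server to become available
        start = max(arrival, earliest)            # wait only if all servers are busy
        total_wait += start - arrival
        heapq.heappush(free_at, start + rng.expovariate(mu))
    return total_wait / customers

print(service_center_mean_wait())
```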

  4. Clinical utility of dental cone-beam computed tomography: current perspectives

    OpenAIRE

    Jaju, Prashant P; Jaju, Sushma P

    2014-01-01

    Prashant P Jaju (1), Sushma P Jaju (2); (1) Oral Medicine and Radiology, (2) Conservative Dentistry and Endodontics, Rishiraj College of Dental Sciences and Research Center, Bhopal, India. Abstract: Panoramic radiography and computed tomography were the pillars of maxillofacial diagnosis. With the advent of cone-beam computed tomography, dental practice has seen a paradigm shift. This review article highlights the potential applications of cone-beam computed tomography in the fields of dental implantology an...

  5. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb^-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  6. Interactive design center.

    Energy Technology Data Exchange (ETDEWEB)

    Pomplun, Alan R. (Sandia National Laboratories, Livermore, CA)

    2005-07-01

    Sandia's advanced computing resources provide researchers, engineers and analysts with the ability to develop and render highly detailed large-scale models and simulations. To take full advantage of these multi-million data point visualizations, display systems with comparable pixel counts are needed. The Interactive Design Center (IDC) is a second generation visualization theater designed to meet this need. The main display integrates twenty-seven projectors in a 9-wide by 3-high array with a total display resolution of more than 35 million pixels. Six individual SmartBoard displays offer interactive capabilities that include on-screen annotation and touch panel control of the facility's display systems. This report details the design, implementation and operation of this innovative facility.

  7. Computed tomography demonstration of a hypothalamic metastasis

    International Nuclear Information System (INIS)

    Chakeres, D.W.

    1983-01-01

    This case report describes a patient who presented with panhypopituitarism secondary to hypothalamic metastasis. A primary hypothalamic abnormality was suggested by computed tomographic (CT) demonstration of a small enhancing circular mass centered within the hypothalamus. Sellar radiographs and cerebral angiography were normal. (orig.)

  8. Computed tomography demonstration of a hypothalamic metastasis

    Energy Technology Data Exchange (ETDEWEB)

    Chakeres, D.W.

    1983-05-01

    This case report describes a patient who presented with panhypopituitarism secondary to hypothalamic metastasis. A primary hypothalamic abnormality was suggested by computed tomographic (CT) demonstration of a small enhancing circular mass centered within the hypothalamus. Sellar radiographs and cerebral angiography were normal.

  9. Physics of the 1 Teraflop RIKEN-BNL-Columbia QCD project. Proceedings of RIKEN BNL Research Center workshop: Volume 13

    International Nuclear Information System (INIS)

    1998-01-01

    A workshop was held at the RIKEN-BNL Research Center on October 16, 1998, as part of the first anniversary celebration for the center. This meeting brought together the physicists from RIKEN-BNL, BNL and Columbia who are using the QCDSP (Quantum Chromodynamics on Digital Signal Processors) computer at the RIKEN-BNL Research Center for studies of QCD. Many of the talks in the workshop were devoted to domain wall fermions, a discretization of the continuum description of fermions which preserves the global symmetries of the continuum, even at finite lattice spacing. This formulation has been the subject of analytic investigation for some time and has reached the stage where large-scale simulations in QCD seem very promising. With the computational power available from the QCDSP computers, scientists are looking forward to an exciting time for numerical simulations of QCD

  10. Automated Library of the Future: Estrella Mountain Community College Center.

    Science.gov (United States)

    Community & Junior College Libraries, 1991

    1991-01-01

    Describes plans for the Integrated High Technology Library (IHTL) at the Maricopa County Community College District's new Estrella Mountain campus, covering collaborative planning, the IHTL's design, and guidelines for the new center and campus (e.g., establishing computing/information-access across the curriculum; developing lifelong learners;…

  11. A User-Centered Cooperative Information System for Medical Imaging Diagnosis.

    Science.gov (United States)

    Gomez, Enrique J.; Quiles, Jose A.; Sanz, Marcos F.; del Pozo, Francisco

    1998-01-01

    Presents a cooperative information system for remote medical imaging diagnosis. General computer-supported cooperative work (CSCW) problems addressed are definition of a procedure for the design of user-centered cooperative systems (conceptual level); and improvement of user feedback and optimization of the communication bandwidth in highly…

  12. Computation for LHC experiments: a worldwide computing grid; Le calcul scientifique des experiences LHC: une grille de production mondiale

    Energy Technology Data Exchange (ETDEWEB)

    Fairouz, Malek [Universite Joseph-Fourier, LPSC, CNRS-IN2P3, Grenoble I, 38 (France)

    2010-08-15

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the consequent experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network involving the dispatch and sharing of tasks has been set up: the W-LCG (Worldwide LHC Computing Grid), made up of 4 tiers. Tier 0 is the computer center at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and dispatching them to the 11 Tier 1 centers. A Tier 1 center is typically a national center; it is responsible for making a copy of the raw data and for processing it in order to recover relevant data with a physical meaning and to transfer the results to the 150 Tier 2 centers. A Tier 2 center is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 centers are at the level of the laboratories; they provide a complementary and local resource to Tier 2 in terms of data analysis. (A.C.)
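
    As a rough consistency check of the quoted figures (assuming on the order of 10^7 seconds of data taking per year, which is an assumption, not a figure from the record):

```latex
\underbrace{\text{a few } 10^{9}\ \mathrm{B/s}}_{\text{sustained data flow}}
\;\times\; \sim 10^{7}\ \mathrm{s/yr}
\;\approx\; \text{a few } 10^{16}\ \mathrm{B/yr},
```

    which is consistent with the stated recording capacity of a few tens of 10^15 octets per year.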

  13. 76 FR 50460 - Privacy Act of 1974; Notice of a Computer Matching Program

    Science.gov (United States)

    2011-08-15

    ... records will be disclosed for the purpose of this computer match are as follows: OPM will use the system... entitled to health care under TRS and TRR.'' E. Description of Computer Matching Program: Under the terms...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD...

  14. A User-Centered Mobile Cloud Computing Platform for Improving Knowledge Management in Small-to-Medium Enterprises in the Chilean Construction Industry

    Directory of Open Access Journals (Sweden)

    Daniela Núñez

    2018-03-01

    Full Text Available Knowledge management (KM is a key element for the development of small-to-medium enterprises (SMEs in the construction industry. This is particularly relevant in Chile, where this industry is composed almost entirely of SMEs. Although various KM system proposals can be found in the literature, they are not suitable for SMEs, due to usability problems, budget constraints, and time and connectivity issues. Mobile Cloud Computing (MCC systems offer several advantages to construction SMEs, but they have not yet been exploited to address KM needs. Therefore, this research is aimed at the development of a MCC-based KM platform to manage lessons learned in different construction projects of SMEs, through an iterative and user-centered methodology. Usability and quality evaluations of the proposed platform show that MCC is a feasible and attractive option to address the KM issues in SMEs of the Chilean construction industry, since it is possible to consider both technical and usability requirements.

  15. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center: Study protocol for a randomised cross-over design.

    Science.gov (United States)

    Antunes, Thaiany Pedrozo Campos; Oliveira, Acary Souza Bulle de; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; Abreu, Luiz Carlos de

    2017-03-01

    Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, they might help initiate their contact with modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) - at baseline; moment 1 (T1) - after 8 typical computer classes; and moment 2 (T2) - after 8 computer classes which include 15 minutes for practicing games in a VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated with loneliness perception. The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT

  16. The Impact of Wireless Technology on Order Selection Audits at an Auto Parts Distribution Center

    Science.gov (United States)

    Goomas, David T.

    2012-01-01

    Audits of store order pallets or totes performed by auditors at five distribution centers (two experimental and three comparison distribution centers) were used to check for picking accuracy prior to being loaded onto a truck for store delivery. Replacing the paper audits with wireless handheld computers that included immediate auditory and visual…

  17. Simulating Shopper Behavior using Fuzzy Logic in Shopping Center Simulation

    Directory of Open Access Journals (Sweden)

    Jason Christian

    2016-12-01

    Full Text Available To simulate real-world phenomena, a computer tool can be used to run a simulation and provide a detailed report. By using a computer-aided simulation tool, we can retrieve information relevant to the simulated subject in a relatively short time. This study is an extended and complete version of initial research done by Christian and Hansun and presents a prototype of a multi-agent shopping center simulation tool along with a fuzzy logic algorithm implemented in the system. Shopping centers and all their components are represented in a simulated 3D environment. The simulation tool was created using the Unity3D engine to build the 3D environment and to run the simulation. To model and simulate the behavior of agents inside the simulation, a fuzzy logic algorithm that uses the agents' basic knowledge as input was built to determine the agents' behavior inside the system and to simulate human behaviors as realistically as possible.
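
    The rule base itself is not given in the abstract; a minimal sketch of the kind of fuzzy inference that could map an agent's state to a behavior, with membership functions and rules invented for illustration, follows.

```python
# Hypothetical fuzzy-logic sketch for a shopper agent: map hunger and budget (both in
# [0, 1]) to a tendency to visit the food court. Memberships and rules are invented.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def visit_food_court_score(hunger, budget):
    hungry = tri(hunger, 0.3, 1.0, 1.7)        # "hungry" peaks at hunger = 1
    not_hungry = tri(hunger, -0.7, 0.0, 0.7)   # "not hungry" peaks at hunger = 0
    can_afford = tri(budget, 0.2, 1.0, 1.8)
    r1 = min(hungry, can_afford)               # IF hungry AND can_afford THEN visit
    r2 = not_hungry                            # IF not hungry THEN do not visit
    total = r1 + r2
    # Defuzzify as a weighted average of the rule consequents (visit=1.0, stay=0.0).
    return (r1 * 1.0 + r2 * 0.0) / total if total > 0 else 0.0

print(visit_food_court_score(hunger=0.8, budget=0.6))   # -> 1.0, inclined to visit
```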

  18. Computer network prepared to handle massive data flow

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an internationl network of computer centers, including one operated jointly by the University of Chicago and Indiana University." (2 pages)

  19. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  20. Random migration processes between two stochastic epidemic centers.

    Science.gov (United States)

    Sazonov, Igor; Kelbert, Mark; Gravenor, Michael B

    2016-04-01

    We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically. Copyright © 2016 Elsevier Inc. All rights reserved.
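
    The paper's analytical results are not reproduced here; as a sketch of the kind of coupled stochastic process being studied, the following Gillespie-style simulation of two SIR population centers whose infectives migrate at random is illustrative only (the SIR structure and all rates are assumptions, not the authors' model parameters).

```python
# Toy continuous-time Markov chain: two SIR centers coupled by random migration of
# infectives at rate m per individual. All rates are illustrative assumptions.
import random

def two_center_sir(beta=0.3, gamma=0.1, m=0.02, n0=500, n1=500, seed=2):
    rng = random.Random(seed)
    S, I, N = [n0 - 5, n1], [5, 0], [n0, n1]     # center 0 seeded with 5 infectives
    t = 0.0
    while sum(I) > 0:
        rates = []
        for k in (0, 1):
            rates.append(beta * S[k] * I[k] / max(N[k], 1))   # infection in center k
            rates.append(gamma * I[k])                        # recovery in center k
            rates.append(m * I[k])                            # an infective migrates out
        total = sum(rates)
        t += rng.expovariate(total)                           # time to next event
        r, acc, idx = rng.random() * total, 0.0, 0
        while acc + rates[idx] < r:                           # pick the event
            acc += rates[idx]
            idx += 1
        k, event = idx // 3, idx % 3
        if event == 0:
            S[k] -= 1; I[k] += 1
        elif event == 1:
            I[k] -= 1                                         # recovered, stays in center k
        else:
            I[k] -= 1; I[1 - k] += 1                          # move one infective
            N[k] -= 1; N[1 - k] += 1
    return t, S                    # outbreak duration and susceptibles left per center

print(two_center_sir())
```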

  1. Lessons learned: mobile device encryption in the academic medical center.

    Science.gov (United States)

    Kusche, Kristopher P

    2009-01-01

    The academic medical center is faced with the unique challenge of meeting the multi-faceted needs of both a modern healthcare organization and an academic institution. The need for security to protect patient information must be balanced against the academic freedoms expected in the college setting. The Albany Medical Center, consisting of the Albany Medical College and the Albany Medical Center Hospital, was challenged with implementing a solution that would preserve the availability, integrity and confidentiality of business, patient and research data stored on mobile devices. To solve this problem, Albany Medical Center implemented a mobile encryption suite across the enterprise. Such an implementation comes with complexities, from performance across multiple generations of computers and operating systems to diversity of application use modes and end-user adoption, all of which requires thoughtful policy and standards creation, understanding of regulations, and a willingness and ability to work through such diverse needs.

  2. A Dynamic and Interactive Monitoring System of Data Center Resources

    Directory of Open Access Journals (Sweden)

    Yu Ling-Fei

    2016-01-01

    Full Text Available To maximize the utilization and effectiveness of resources, it is necessary to have a well-suited management system for modern data centers. Traditional approaches to resource provisioning and service requests have proven to be ill suited for virtualization and cloud computing. The manual handoffs between technology teams were also highly inefficient and poorly documented. In this paper, a dynamic and interactive monitoring system for data center resources, ResourceView, is presented. By consolidating all data center management functionality into a single interface, ResourceView shares a common view of the timeline metric status, while providing comprehensive, centralized monitoring of data center physical and virtual IT assets including power, cooling, physical space and VMs, in order to improve availability and efficiency. In addition, servers and VMs can be monitored from several viewpoints such as clusters, racks and projects, which is very convenient for users.

  3. Synergies and Distinctions between Computational Disciplines in Biomedical Research: Perspective from the Clinical and Translational Science Award Programs

    Science.gov (United States)

    Bernstam, Elmer V.; Hersh, William R.; Johnson, Stephen B.; Chute, Christopher G.; Nguyen, Hien; Sim, Ida; Nahm, Meredith; Weiner, Mark; Miller, Perry; DiLaura, Robert P.; Overcash, Marc; Lehmann, Harold P.; Eichmann, David; Athey, Brian D.; Scheuermann, Richard H.; Anderson, Nick; Starren, Justin B.; Harris, Paul A.; Smith, Jack W.; Barbour, Ed; Silverstein, Jonathan C.; Krusch, David A.; Nagarajan, Rakesh; Becich, Michael J.

    2010-01-01

    Clinical and translational research increasingly requires computation. Projects may involve multiple computationally-oriented groups including information technology (IT) professionals, computer scientists and biomedical informaticians. However, many biomedical researchers are not aware of the distinctions among these complementary groups, leading to confusion, delays and sub-optimal results. Although written from the perspective of clinical and translational science award (CTSA) programs within academic medical centers, the paper addresses issues that extend beyond clinical and translational research. The authors describe the complementary but distinct roles of operational IT, research IT, computer science and biomedical informatics using a clinical data warehouse as a running example. In general, IT professionals focus on technology. The authors distinguish between two types of IT groups within academic medical centers: central or administrative IT (supporting the administrative computing needs of large organizations) and research IT (supporting the computing needs of researchers). Computer scientists focus on general issues of computation such as designing faster computers or more efficient algorithms, rather than specific applications. In contrast, informaticians are concerned with data, information and knowledge. Biomedical informaticians draw on a variety of tools, including but not limited to computers, to solve information problems in health care and biomedicine. The paper concludes with recommendations regarding administrative structures that can help to maximize the benefit of computation to biomedical research within academic health centers. PMID:19550198

  4. Stern-Center Potsdam; Stern-Center Potsdam

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1999-07-01

    The ''Stern-Center Potsdam'' is a star-shaped shopping mall in the city center. [Translated from German] As a shopping center at the gates of Berlin, the Stern-Center in Potsdam offers space for a large number of shops. The star-shaped structure of the Center's building forms the focal point of the 'Am Stern' district. (orig.)

  5. Requirements for SSC central computing staffing (conceptual)

    International Nuclear Information System (INIS)

    Pfister, J.

    1985-01-01

    Given a computation center with ~10,000 MIPS supporting ~1,000 users, what are the staffing requirements? This paper attempts to list the functions and staff size required in a central computing or centrally supported computing complex. The organization assumes that, although considerable computing power would exist (mostly for online use) in the four interaction regions (IRs), there are functions/capabilities better performed outside the IRs and, in this model, at a ''central computing facility.'' What follows is one staffing approach, not necessarily optimal, with certain assumptions about the numbers of computer systems, media, networks and system controls; that is, one would get the best technology available. Thus, it is speculation about what the technology may bring and what it takes to operate it. From an end-user support standpoint it is less clear, given the geography of an SSC, where the consulting support should be located and what it should look like

  6. Development and organization of scientific methodology and information databases for nuclear technology calculations

    International Nuclear Information System (INIS)

    Gritzay, O.; Kalchenko, O.

    2010-01-01

    Full text: Scientific support of NPPs has to cover several important aspects of scientific and organizational activity, namely:
    1. Training of a group of highly skilled specialists to do the following work: nuclear data generation for engineering calculations; engineering calculations to ensure the safe operation of NPPs; experimental and computational support of fluence dosimetry at NPPs.
    2. Development of an up-to-date computer base, equipped with the necessary program packages for nuclear data generation and engineering calculations.
    3. Maintenance of updated Libraries of Evaluated Nuclear Data (ENDF), such as ENDF/B-VII (USA), JENDL-3.3 (Japan), JEFF-3.1 (Europe) and RUSFOND (Russia), and as a result the generation of specialized multi-group nuclear data libraries for special-purpose engineering calculations.
    To reach these purposes, the Ukrainian Nuclear Data Center (UKRNDC) has been organized and developed for more than 10 years (since 1996). The capabilities of the UKRNDC are detailed below.
    o Modern ENDF libraries, first of all the general-purpose libraries such as ENDF/B-7.0, -6.8, JEFF-3.1.1, JENDL-3.3, etc. These databases contain recommended, evaluated cross sections, spectra, angular distributions, fission product yields, photo-atomic and thermal scattering law data, with emphasis on neutron-induced reactions.
    o Codes for processing these data, updated to the latest versions of ENDF and other libraries. First of all these are the PREPRO 2007 package (updated March 17, 2007) and the NJOY package, updated to versions NJOY-158 and NJOY-253 (in 2009). These codes make it possible to produce multi-group data for the needed spectrum of interacting particles (neutrons, protons, gammas) and temperatures.
    o A computer base of several specialized server stations, such as an ESCALA S120 (analogous to an IBM-240 with a RISC 6000 processor) operating under OS UNIX (version AIX 5.1), and IBM PCs operating under Linux Red Hat 7.2.
    o A set of PCs joined in the UKRNDC network, operating mainly under OS Windows

  7. 8th Workshop on Computational Optimization

    CERN Document Server

    2016-01-01

    This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2015. It presents recent advances in computational optimization. The volume includes important real-life problems such as parameter settings for controlling processes in a bioreactor, control of ethanol production, minimal convex hull with application in routing algorithms, graph coloring, flow design in a photonic data transport system, predicting indoor temperature, crisis control center monitoring, fuel consumption of helicopters, portfolio selection, GPS surveying and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization problems.

  8. A 3-Month Randomized Controlled Pilot Trial of a Patient-Centered, Computer-Based Self-Monitoring System for the Care of Type 2 Diabetes Mellitus and Hypertension.

    Science.gov (United States)

    Or, Calvin; Tao, Da

    2016-04-01

    This study was performed to evaluate the effects of a patient-centered, tablet computer-based self-monitoring system for chronic disease care. A 3-month randomized controlled pilot trial was conducted to compare the use of a computer-based self-monitoring system in disease self-care (intervention group; n = 33) with a conventional self-monitoring method (control group; n = 30) in patients with type 2 diabetes mellitus and/or hypertension. The system was equipped with a 2-in-1 blood glucose and blood pressure monitor, a reminder feature, and video-based educational materials for the care of the two chronic diseases. The control patients were given only the 2-in-1 monitor for self-monitoring. The outcomes reported here included the glycated hemoglobin (HbA1c) level, fasting blood glucose level, systolic blood pressure, diastolic blood pressure, chronic disease knowledge, and frequency of self-monitoring. The data were collected at baseline and at 1-, 2-, and 3-month follow-up visits. The patients in the intervention group had a significant decrease in mean systolic blood pressure from baseline to 1 month (p computer-assisted and conventional disease self-monitoring appear to be useful to support/maintain blood pressure and diabetes control. The beneficial effects of the use of electronic self-care resources and support provided via mobile technologies require further confirmation in longer-term, larger trials.

  9. Modeling subsurface reactive flows using leadership-class computing

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Richard Tran [Computational Earth Sciences Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6015 (United States); Hammond, Glenn E [Hydrology Group, Environmental Technology Division, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Lichtner, Peter C [Hydrology, Geochemistry, and Geology Group, Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Sripathi, Vamsi [Department of Computer Science, North Carolina State University, Raleigh, NC 27695-8206 (United States); Mahinthakumar, G [Department of Civil, Construction, and Environmental Engineering, North Carolina State University, Raleigh, NC 27695-7908 (United States); Smith, Barry F, E-mail: rmills@ornl.go, E-mail: glenn.hammond@pnl.go, E-mail: lichtner@lanl.go, E-mail: vamsi_s@ncsu.ed, E-mail: gmkumar@ncsu.ed, E-mail: bsmith@mcs.anl.go [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439-4844 (United States)

    2009-07-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  10. Modeling subsurface reactive flows using leadership-class computing

    International Nuclear Information System (INIS)

    Mills, Richard Tran; Hammond, Glenn E; Lichtner, Peter C; Sripathi, Vamsi; Mahinthakumar, G; Smith, Barry F

    2009-01-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  11. A multigrid algorithm for the cell-centered finite difference scheme

    Science.gov (United States)

    Ewing, Richard E.; Shen, Jian

    1993-01-01

    In this article, we discuss a non-variational V-cycle multigrid algorithm based on the cell-centered finite difference scheme for solving a second-order elliptic problem with discontinuous coefficients. Due to the poor approximation property of piecewise constant spaces and the non-variational nature of our scheme, one step of symmetric linear smoothing in our V-cycle multigrid scheme may fail to be a contraction. Again, because of the simple structure of the piecewise constant spaces, prolongation and restriction are trivial; we save significant computation time with very promising computational results.
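
    The scheme analyzed in the article is not reproduced in the abstract; the toy 1D sketch below only illustrates the ingredients named there (cell-centered differences, piecewise-constant restriction and prolongation, a simple smoother) and is an assumption-laden stand-in, not the authors' algorithm or analysis.

```python
# Toy 1D cell-centered multigrid V-cycle for -u'' = f on (0,1), u(0) = u(1) = 0.
# Piecewise-constant transfers and damped Jacobi are used purely for illustration.
import numpy as np

def apply_A(u, h):
    """Cell-centered 3-point Laplacian; reflection ghost cells impose u = 0 on the boundary."""
    up = np.concatenate(([-u[0]], u, [-u[-1]]))
    return (-up[:-2] + 2.0 * up[1:-1] - up[2:]) / h**2

def smooth(u, f, h, sweeps, omega=2.0 / 3.0):
    diag = np.full(u.size, 2.0 / h**2)
    diag[[0, -1]] = 3.0 / h**2                      # boundary cells see the ghost value
    for _ in range(sweeps):
        u = u + omega * (f - apply_A(u, h)) / diag  # damped Jacobi sweep
    return u

def v_cycle(u, f, h, pre=3, post=3):
    if u.size <= 2:
        return smooth(u, f, h, sweeps=100)          # near-exact coarsest-grid solve
    u = smooth(u, f, h, pre)
    r = f - apply_A(u, h)
    r_coarse = 0.5 * (r[0::2] + r[1::2])            # piecewise-constant restriction
    e_coarse = v_cycle(np.zeros(u.size // 2), r_coarse, 2.0 * h)
    u = u + np.repeat(e_coarse, 2)                  # piecewise-constant prolongation
    return smooth(u, f, h, post)

n = 64
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
f = np.pi**2 * np.sin(np.pi * x)                    # manufactured right-hand side
u = np.zeros(n)
for k in range(10):
    u = v_cycle(u, f, h)
    print(k, np.linalg.norm(f - apply_A(u, h)) / np.linalg.norm(f))  # residual history
```

    As the abstract cautions, with piecewise-constant transfers an individual smoothing step need not be a contraction; the printed residual history simply shows whatever rate this particular toy achieves.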

  12. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    Science.gov (United States)

    1995-10-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  13. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    Science.gov (United States)

    1995-01-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  14. Efficient management of data center resources for massively multiplayer online games

    NARCIS (Netherlands)

    Nae, V.; Iosup, A.; Podlipnig, S.; Prodan, R.; Epema, D.H.J.; Fahringer, T.

    2008-01-01

    Today's massively multiplayer online games (MMOGs) can include millions of concurrent players spread across the world. To keep these highly-interactive virtual environments online, a MMOG operator may need to provision tens of thousands of computing resources from various data centers. Faced with

  15. Nuclear safety research collaborations between the U.S. and Russian Federation International Nuclear Safety Centers

    International Nuclear Information System (INIS)

    Hill, D. J.; Braun, J. C.; Klickman, A. E.; Bougaenko, S. E.; Kabonov, L. P.; Kraev, A. G.

    2000-01-01

    The Russian Federation Ministry for Atomic Energy (MINATOM) and the US Department of Energy (USDOE) have formed International Nuclear Safety Centers to collaborate on nuclear safety research. USDOE established the US Center (ISINSC) at Argonne National Laboratory (ANL) in October 1995. MINATOM established the Russian Center (RINSC) at the Research and Development Institute of Power Engineering (RDIPE) in Moscow in July 1996. In April 1998 the Russian center became a semi-independent, autonomous organization under MINATOM. The goals of the centers are to: Cooperate in the development of technologies associated with nuclear safety in nuclear power engineering; Be international centers for the collection of information important for safety and technical improvements in nuclear power engineering; and Maintain a base for fundamental knowledge needed to design nuclear reactors. The strategic approach being used to accomplish these goals is for the two centers to work together to use the resources and the talents of the scientists associated with the US Center and the Russian Center to do collaborative research to improve the safety of Russian-designed nuclear reactors. The two centers started conducting joint research and development projects in January 1997. Since that time the following ten joint projects have been initiated: INSC databases--web server and computing center; Coupled codes--Neutronic and thermal-hydraulic; Severe accident management for Soviet-designed reactors; Transient management and advanced control; Survey of relevant nuclear safety research facilities in the Russian Federation; Computer code validation for transient analysis of VVER and RBMK reactors; Advanced structural analysis; Development of a nuclear safety research and development plan for MINATOM; Properties and applications of heavy liquid metal coolants; and Material properties measurement and assessment. Currently, there is activity in eight of these projects. Details on each of these

  16. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  17. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  18. System security in the space flight operations center

    Science.gov (United States)

    Wagner, David A.

    1988-01-01

    The Space Flight Operations Center is a networked system of workstation-class computers that will provide ground support for NASA's next generation of deep-space missions. The author recounts the development of the SFOC system security policy and discusses the various management and technology issues involved. Particular attention is given to risk assessment, security plan development, security implications of design requirements, automatic safeguards, and procedural safeguards.

  19. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  20. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
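
    As a rough sketch of the manager-worker strategy mentioned above, the fragment below uses mpi4py rather than the PVM library of the original TEAM port, and the task payloads are hypothetical placeholders for per-block flow solves.

        from mpi4py import MPI

        # Manager (rank 0) hands out block indices; workers return one result per block.
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:                                  # manager
            tasks = list(range(2 * (size - 1)))        # e.g. grid blocks to process
            results, active, status = [], 0, MPI.Status()
            for worker in range(1, size):              # seed one task per worker
                comm.send(tasks.pop(0), dest=worker)
                active += 1
            while active > 0:
                results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
                src = status.Get_source()
                if tasks:
                    comm.send(tasks.pop(0), dest=src)  # keep the worker busy
                else:
                    comm.send(None, dest=src)          # no more work for this worker
                    active -= 1
            print("manager collected:", sorted(results))
        else:                                          # worker
            while True:
                block = comm.recv(source=0)
                if block is None:
                    break
                comm.send(block * block, dest=0)       # placeholder for a real solve

    Run with, for example, mpiexec -n 4 python manager_worker.py (the file name is arbitrary).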

  1. Education through the prism of computation

    Science.gov (United States)

    Kaurov, Vitaliy

    2014-03-01

    With the rapid development of technology, computation claims its irrevocable place among research components of modern science. Thus, to foster a successful future scientist, engineer or educator, we need to add computation to the foundations of scientific education. We will discuss the paradigm shifts it brings to these foundations using the example of the Wolfram Science Summer School. It is one of the most advanced computational outreach programs run by the Wolfram Foundation, welcoming participants of almost all ages and backgrounds. Centered on complexity science and physics, it also covers numerous adjacent and interdisciplinary fields such as finance, biology, medicine and even music. We will talk about educational and research experiences in this program during the 12 years of its existence. We will review statistics and outputs the program has produced. Among these are interactive electronic publications at the Wolfram Demonstrations Project and contributions to the computational knowledge engine Wolfram|Alpha.

  2. International Symposium on Computing and Network Sustainability

    CERN Document Server

    Akashe, Shyam

    2017-01-01

    The book is a compilation of technical papers presented at the International Research Symposium on Computing and Network Sustainability (IRSCNS 2016) held in Goa, India on 1st and 2nd July 2016. The areas covered in the book are sustainable computing and security, sustainable systems and technologies, sustainable methodologies and applications, sustainable networks applications and solutions, user-centered services and systems, and mobile data management. The novel and recent technologies presented in the book will be helpful for researchers and industries in their advanced work.

  3. Physics, Computer Science and Mathematics Division annual report, January 1--December 31, 1976

    International Nuclear Information System (INIS)

    Lepore, J.V.

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during the calendar year 1976. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics; a vigorous program is maintained in this pioneering field. The high-energy physics research program in the Division now focuses on experiments with e+e- colliding beams using advanced techniques and developments initiated and perfected at the Laboratory. The Division continues its work in medium energy physics, with experimental work carried out at the Bevatron and at the Los Alamos Pi-Meson Facility. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The computer center serves the Laboratory by constantly upgrading its facility and by providing day-to-day service. This report is descriptive in nature; references to detailed publications are given

  4. Physics, Computer Science and Mathematics Division annual report, January 1--December 31, 1976

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.V. (ed.)

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during the calendar year 1976. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics; a vigorous program is maintained in this pioneering field. The high-energy physics research program in the Division now focuses on experiments with e+e- colliding beams using advanced techniques and developments initiated and perfected at the Laboratory. The Division continues its work in medium energy physics, with experimental work carried out at the Bevatron and at the Los Alamos Pi-Meson Facility. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The computer center serves the Laboratory by constantly upgrading its facility and by providing day-to-day service. This report is descriptive in nature; references to detailed publications are given. (RWR)

  5. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2015-01-01

    Warehouse-sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers that provide the infrastructure to the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput, the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within ...

  6. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    Energy Technology Data Exchange (ETDEWEB)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  7. Voltage profile program for the Kennedy Space Center electric power distribution system

    Science.gov (United States)

    1976-01-01

    The Kennedy Space Center voltage profile program computes voltages at all buses greater than 1 kV in the network under various conditions of load. The computation is based upon power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady state and transient operation. In the steady state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts etc., it is assumed that tap changing is not accomplished so that transformer secondary voltage is allowed to sag.
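
    As a hedged illustration of the Newton-Raphson load-flow idea (not the KSC program itself), the core iteration drives the power-mismatch equations to zero; the two-bus network, impedance, and loads below are hypothetical, and the Jacobian is formed numerically for brevity rather than analytically as production load-flow codes do.

        import numpy as np

        # Two-bus example: bus 1 is the slack bus (1.0 p.u., angle 0); bus 2 draws
        # P = 0.8, Q = 0.4 p.u. through a line of impedance z = 0.02 + 0.06j p.u.
        z, V1 = 0.02 + 0.06j, 1.0 + 0.0j
        y = 1.0 / z
        P_load, Q_load = 0.8, 0.4

        def mismatch(x):
            vm, th = x
            V2 = vm * np.exp(1j * th)
            S2 = V2 * np.conj(y * (V2 - V1))   # complex power injected at bus 2
            # at a pure load bus the injected power should equal minus the load
            return np.array([S2.real + P_load, S2.imag + Q_load])

        x = np.array([1.0, 0.0])               # flat start: |V2| = 1, angle = 0
        for _ in range(10):
            f = mismatch(x)
            if np.max(np.abs(f)) < 1e-10:
                break
            J = np.zeros((2, 2))
            for j in range(2):                 # numerical Jacobian, column by column
                dx = np.zeros(2); dx[j] = 1e-6
                J[:, j] = (mismatch(x + dx) - f) / 1e-6
            x = x - np.linalg.solve(J, f)      # Newton-Raphson update

        print("bus-2 voltage: %.4f p.u. at %.4f rad" % (x[0], x[1]))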

  8. Investigating Impact Metrics for Performance for the US EPA National Center for Computational Toxicology (ACS Fall meeting)

    Science.gov (United States)

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data drive...

  9. Abstracts of computer programs and data libraries pertaining to photon production data

    Energy Technology Data Exchange (ETDEWEB)

    White, J.E.; Manneschmidt, J.B.; Finch, S.Y.; Dickens, J.K.

    1998-06-01

    Abstracts, or descriptions, of computer programs and data libraries pertaining to Photon Production Data (Measurements, Evaluations and Calculations) maintained in the collections of the Radiation Safety Information Computational Center, Oak Ridge, Tennessee USA and at the OECD/NEA Data Bank, Paris, are collected in this document.

  10. Abstracts of computer programs and data libraries pertaining to photon production data

    International Nuclear Information System (INIS)

    White, J.E.; Manneschmidt, J.B.; Finch, S.Y.; Dickens, J.K.

    1998-01-01

    Abstracts, or descriptions, of computer programs and data libraries pertaining to Photon Production Data (Measurements, Evaluations and Calculations) maintained in the collections of the Radiation Safety Information Computational Center, Oak Ridge, Tennessee USA and at the OECD/NEA Data Bank, Paris, are collected in this document

  11. Predicting Structures of Ru-Centered Dyes: A Computational Screening Tool.

    Science.gov (United States)

    Fredin, Lisa A; Allison, Thomas C

    2016-04-07

    Dye-sensitized solar cells (DSCs) represent a means for harvesting solar energy to produce electrical power. Though a number of light harvesting dyes are in use, the search continues for more efficient and effective compounds to make commercially viable DSCs a reality. Computational methods have been increasingly applied to understand the dyes currently in use and to aid in the search for improved light harvesting compounds. Semiempirical quantum chemistry methods have a well-deserved reputation for giving good quality results in a very short amount of computer time. The most recent semiempirical models such as PM6 and PM7 are parametrized for a wide variety of molecule types, including organometallic complexes similar to DSC chromophores. In this article, the performance of PM6 is tested against a set of 20 molecules whose geometries were optimized using a density functional theory (DFT) method. It is found that PM6 gives geometries that are in good agreement with the optimized DFT structures. In order to reduce the differences between geometries optimized using PM6 and geometries optimized using DFT, the PM6 basis set parameters have been optimized for a subset of the molecules. It is found that it is sufficient to optimize the basis set for Ru alone to improve the agreement between the PM6 results and the DFT results. When this optimized Ru basis set is used, the mean unsigned error in Ru-ligand bond lengths is reduced from 0.043 to 0.017 Å in the set of 20 test molecules. Though the magnitude of these differences is small, the effect on the calculated UV/vis spectra is significant. These results clearly demonstrate the value of using PM6 to screen DSC chromophores as well as the value of optimizing PM6 basis set parameters for a specific set of molecules.

  12. Quantum computing with defects in diamond

    International Nuclear Information System (INIS)

    Jelezko, F.; Gaebel, T.; Popa, I.; Domhan, M.; Wittmann, C.; Wrachtrup, J.

    2005-01-01

    Full text: Single spins in semiconductors, in particular associated with defect centers, are promising candidates for practical and scalable implementation of quantum computing even at room temperature. Such an implementation may also use the reliable and well known gate constructions from bulk nuclear magnetic resonance (NMR) quantum computing. Progress in development of quantum processor based on defects in diamond will be discussed. By combining optical microscopy, and magnetic resonance techniques, the first quantum logical operations on single spins in a solid are now demonstrated. The system is perspective for room temperature operation because of a weak dependence of decoherence on temperature (author)

  13. Examination of concept of next generation computer. Progress report 1999

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Hasegawa, Yukihiro; Hirayama, Toshio

    2000-12-01

    The Center for Promotion of Computational Science and Engineering has conducted R and D work on parallel processing technology and in 1999 started examining the concept of the next generation computer. This report describes the behavior analyses of quantum calculation codes. It also describes considerations arising from those analyses and the results of examining methods to reduce cache misses. Furthermore, it describes a performance simulator that is being developed to quantitatively examine the concept of the next generation computer. (author)

  14. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    Science.gov (United States)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities

  15. Scientific visualization in computational aerodynamics at NASA Ames Research Center

    Science.gov (United States)

    Bancroft, Gordon V.; Plessel, Todd; Merritt, Fergus; Walatka, Pamela P.; Watson, Val

    1989-01-01

    The visualization methods used in computational fluid dynamics research at the NASA-Ames Numerical Aerodynamic Simulation facility are examined, including postprocessing, tracking, and steering methods. The visualization requirements of the facility's three-dimensional graphical workstation are outlined and the types of hardware and software used to meet these requirements are discussed. The main features of the facility's current and next-generation workstations are listed. Emphasis is given to postprocessing techniques, such as dynamic interactive viewing on the workstation and recording and playback on videodisk, tape, and 16-mm film. Postprocessing software packages are described, including a three-dimensional plotter, a surface modeler, a graphical animation system, a flow analysis software toolkit, and a real-time interactive particle-tracer.

  16. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  17. Multiple single-centered attractors

    International Nuclear Information System (INIS)

    Dominic, Pramod; Mandal, Taniya; Tripathy, Prasanta K.

    2014-01-01

    In this paper we study spherically symmetric single-centered attractors in N=2 supergravity in four dimensions. The attractor points are obtained by extremising the effective black hole potential in the moduli space. Both supersymmetric as well as non-supersymmetric attractors exist in mutually exclusive domains of the charge lattice. We construct axion free supersymmetric as well as non-supersymmetric multiple attractors in a simple two parameter model. We further obtain explicit examples of two distinct non-supersymmetric attractors in type IIA string theory compactified on K3×T² carrying D0−D4−D6 charges. We compute the entropy of these attractors and analyse their stability in detail.
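
    For background (standard attractor-mechanism relations, not results specific to this paper's models), the attractor points and the entropy obtained from the effective black hole potential take the form:

        % attractor points extremise the effective black hole potential over the moduli
        \partial_i V_{\mathrm{BH}}(\phi, p, q)\big|_{\phi = \phi_*} = 0,
        % and the Bekenstein-Hawking entropy is fixed by its value at the extremum
        S_{\mathrm{BH}} = \frac{A_H}{4} = \pi\, V_{\mathrm{BH}}(\phi_*, p, q).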

  18. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, in May 2009 NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe already ...

  19. Information sharing guidebook for transportation management centers, emergency operations centers, and fusion centers

    Science.gov (United States)

    2010-06-01

    This guidebook provides an overview of the mission and functions of transportation management centers, emergency operations centers, and fusion centers. The guidebook focuses on the types of information these centers produce and manage and how the sh...

  20. Information sharing guidebook for transportation management centers, emergency operations centers, and fusion centers.

    Science.gov (United States)

    2010-06-01

    This guidebook provides an overview of the mission and functions of transportation management centers, emergency operations centers, and fusion centers. The guidebook focuses on the types of information these centers produce and manage and how the sh...

  1. Computer aided system engineering for space construction

    Science.gov (United States)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  2. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
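
    As a rough sketch of how such virtual machines can be provisioned programmatically on an OpenStack cloud, the fragment below uses the generic openstacksdk pattern rather than the EKP setup described above; the cloud, image, flavor, and network names are hypothetical placeholders.

        import openstack

        # Provision one worker VM on a private OpenStack cloud; credentials are read
        # from clouds.yaml. All resource names below are hypothetical placeholders.
        conn = openstack.connect(cloud="ekp-private-cloud")

        image = conn.compute.find_image("hep-worker-image")
        flavor = conn.compute.find_flavor("m1.large")
        network = conn.network.find_network("workers-net")

        server = conn.compute.create_server(
            name="mc-worker-01",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.status)   # ACTIVE once the VM is ready to join the batch pool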

  3. Development of computational science in JAEA. R and D of simulation

    International Nuclear Information System (INIS)

    Nakajima, Norihiro; Araya, Fumimasa; Hirayama, Toshio

    2006-01-01

    R and D of computational science in JAEA (Japan Atomic Energy Agency) is described. The computing environment, the R and D system in CCSE (Center for Computational Science and e-Systems), joint computational science research in Japan and worldwide, development of computer technologies, some examples of simulation research, the 3-dimensional image vibrational platform system, simulation research on FBR cycle techniques, simulation of large-scale thermal stress for development of a steam generator, simulation research on fusion energy techniques, development of grid computing technology, simulation research on quantum beam techniques, and biological molecule simulation research are explained. The organization of JAEA, the development of computational science in JAEA, the JAEA network, international collaboration in computational science, and the environment of the ITBL (Information-Technology Based Laboratory) project are illustrated. (S.Y.)

  4. Simulation of Thermal Distribution and Airflow for Efficient Energy Consumption in a Small Data Centers

    Directory of Open Access Journals (Sweden)

    Jing Ni

    2017-04-01

    Data centers have become ubiquitous in the last few years in an attempt to keep pace with the processing and storage needs of the Internet and cloud computing. The steady growth in the heat densities of IT servers leads to a rise in the energy needed to cool them, which constitutes approximately 40% of the power consumed by data centers. However, many data centers feature redundant air conditioning systems that contribute to inefficient air distribution, which significantly increases energy consumption. This remains an insufficiently explored problem. In this paper, a typical, small data center with tiles for an air supply system with a raised floor is used. We use Fluent, a computational fluid dynamics (CFD) code, to simulate thermal distribution and airflow, and investigate the optimal conditions of air distribution to save energy. The effects of the airflow outlet angle along the tile, the cooling temperature and the rate of airflow on the beta index as well as the energy utilization index are discussed, and the optimal conditions are obtained. The reasonable airflow distribution achieved using 3D CFD calculations and the parameter settings provided in this paper can help reduce the energy consumption of data centers by improving the efficiency of the air conditioning.

  5. Anatomy of a Security Operations Center

    Science.gov (United States)

    Wang, John

    2010-01-01

    Many agencies and corporations are either contemplating or in the process of building a cyber Security Operations Center (SOC). Those agencies that have established SOCs are most likely working on major revisions or enhancements to existing capabilities. As principal developers of the NASA SOC, this presenter's goals are to provide the GFIRST community with examples of some of the key building blocks of an Agency-scale cyber Security Operations Center. This presentation will include the inputs and outputs, the facilities or shell, as well as the internal components and the processes necessary to maintain the SOC's subsistence - in other words, the anatomy of a SOC. Details to be presented include the SOC architecture and its key components: Tier 1 Call Center, data entry, and incident triage; Tier 2 monitoring, incident handling and tracking; Tier 3 computer forensics, malware analysis, and reverse engineering; Incident Management System; Threat Management System; SOC Portal; Log Aggregation and Security Incident Management (SIM) systems; flow monitoring; IDS; etc. Specific processes and methodologies discussed include Incident States and associated Work Elements; the Incident Management Workflow Process; Cyber Threat Risk Assessment methodology; and Incident Taxonomy. The evolution of the Cyber Security Operations Center will be discussed, starting from reactive and moving toward proactive operations. Finally, the resources necessary to establish an Agency-scale SOC, as well as the lessons learned in the process of standing up a SOC, will be presented.

  6. Evaluation of Rankine cycle air conditioning system hardware by computer simulation

    Science.gov (United States)

    Healey, H. M.; Clark, D.

    1978-01-01

    A computer program for simulating the performance of a variety of solar powered Rankine cycle air conditioning system (RCACS) components has been developed. The computer program models actual equipment by developing performance maps from manufacturers' data and is capable of simulating off-design operation of the RCACS components. The program, designed to be a subroutine of the Marshall Space Flight Center (MSFC) Solar Energy System Analysis Computer Program 'SOLRAD', is a complete package suitable for use by an occasional computer user in developing performance maps of heating, ventilation and air conditioning components.
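
    A minimal sketch of the performance-map idea described above: tabulated manufacturer data are interpolated to estimate off-design operation. The component, numbers, and grid below are hypothetical and unrelated to the SOLRAD subroutine itself.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical chiller performance map: COP tabulated from manufacturer data
        # over condensing and evaporating temperatures (deg C).
        t_cond = np.array([30.0, 40.0, 50.0])
        t_evap = np.array([5.0, 10.0, 15.0])
        cop = np.array([[4.2, 4.8, 5.3],
                        [3.4, 3.9, 4.4],
                        [2.7, 3.1, 3.6]])   # rows follow t_cond, columns t_evap

        performance_map = RegularGridInterpolator((t_cond, t_evap), cop)

        # Off-design operating point supplied by the system simulation:
        print(performance_map([[43.0, 8.0]]))   # interpolated COP at 43 C / 8 C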

  7. Personal computers in accelerator control

    International Nuclear Information System (INIS)

    Anderssen, P.S.

    1988-01-01

    The advent of the personal computer has created a popular movement which has also made a strong impact on science and engineering. Flexible software environments combined with good computational performance and large storage capacities are becoming available at steadily decreasing costs. Of equal importance, however, is the quality of the user interface offered on many of these products. Graphics and screen interaction is available in ways that were only possible on specialized systems before. Accelerator engineers were quick to pick up the new technology. The first applications were probably for controllers and data gatherers for beam measurement equipment. Others followed, and today it is conceivable to make the personal computer a standard component of an accelerator control system. This paper reviews the experience gained at CERN so far and describes the approach taken in the design of the common control center for the SPS and the future LEP accelerators. The design goal has been to be able to integrate personal computers into the accelerator control system and to build the operator's workplace around it. (orig.)

  8. Web Solutions Inspire Cloud Computing Software

    Science.gov (United States)

    2013-01-01

    An effort at Ames Research Center to standardize NASA websites unexpectedly led to a breakthrough in open source cloud computing technology. With the help of Rackspace Inc. of San Antonio, Texas, the resulting product, OpenStack, has spurred the growth of an entire industry that is already employing hundreds of people and generating hundreds of millions in revenue.

  9. The Lister Hill National Center for Biomedical Communications.

    Science.gov (United States)

    Smith, K A

    1994-09-01

    On August 3, 1968, the Joint Resolution of the Congress established the program and construction of the Lister Hill National Center for Biomedical Communications. The facility dedicated in 1980 contains the latest in computer and communications technologies. The history, program requirements, construction management, and general planning are discussed including technical issues regarding cabling, systems functions, heating, ventilation, and air conditioning system (HVAC), fire suppression, research and development laboratories, among others.

  10. Canadian ATLAS data center to support CERN's LHC

    CERN Multimedia

    2006-01-01

    "The biggest science experiment in history is currently underway at the world-famous CERN labs in Switzerland, and Canada is poised to play a critical role in its success. Thanks to a $10.5 million investment announced by the Canada Foundation for Innovation (CFI), an ultra-sophisticated computing facility -- the ATLAS Data Center -- will be created to support the ATLAS project at CERN's Large Hadron Collider (LHC)." (1 page)

  11. Fractal geometry and computer graphics

    CERN Document Server

    Sakas, Georgios; Peitgen, Heinz-Otto; Englert, Gabriele

    1992-01-01

    Fractal geometry has become popular in the last 15 years; its applications can be found in technology, science, or even the arts. Fractal methods and formalism are seen today as a general, abstract, but nevertheless practical instrument for the description of nature in a wide sense. But it was Computer Graphics which made possible the increasing popularity of fractals several years ago, and long after their mathematical formulation. The two disciplines are tightly linked. The book contains the scientific contributions presented in an international workshop at the "Computer Graphics Center" in Darmstadt, Germany. The target of the workshop was to present the wide spectrum of interrelationships and interactions between Fractal Geometry and Computer Graphics. The topics vary from fundamentals and new theoretical results to various applications and systems development. All contributions are original, unpublished papers. The presentations have been discussed in two working groups; the discussion results, together with a...

  12. Low cost spacecraft computers: Oxymoron or future trend?

    Science.gov (United States)

    Manning, Robert M.

    1993-01-01

    Over the last few decades, application of current terrestrial computer technology in embedded spacecraft control systems has been expensive and fraught with many technical challenges. These challenges have centered on overcoming the extreme environmental constraints (protons, neutrons, gamma radiation, cosmic rays, temperature, vibration, etc.) that often preclude direct use of commercial off-the-shelf computer technology. Reliability, fault tolerance and power have also greatly constrained the selection of spacecraft control system computers. More recently, new constraints are being felt, cost and mass in particular, that have again narrowed the degrees of freedom spacecraft designers once enjoyed. This paper discusses these challenges, how they were previously overcome, how future trends in commercial computer technology will simplify (or hinder) selection of computer technology for spacecraft control applications, and what spacecraft electronic system designers can do now to circumvent them.

  13. 76 FR 21091 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare & Medicaid...

    Science.gov (United States)

    2011-04-14

    ...: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...: A. General The Computer Matching and Privacy Protection Act of 1988 (Public Law (Pub. L.) 100-503...), as amended, (Pub. L. 100-503, the Computer Matching and Privacy Protection Act (CMPPA) of 1988), the...

  14. Computer network access to scientific information systems for minority universities

    Science.gov (United States)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  15. Computer software summaries. Numbers 1 through 423

    International Nuclear Information System (INIS)

    1979-09-01

    The National Energy Software Center (NESC) serves as the software exchange and information center for the US Department of Energy and the Nuclear Regulatory Commission. A major activity of the Center is the preparation and publication of two reports issued periodically - the Center's compilation of program abstracts, ANL-7411, and this software summaries report, ANL-8040. The abstracts describe the software packages available in the software exchange library maintained and distributed by the Center. The summaries describe agency-sponsored software that is at the specification stage, under development, being checked out, in use, or available at agency offices, laboratories, and contractor installations. Summaries describe software that is not included in the NESC library due to its preliminary status or because it is believed to be of limited interest. The purpose of the summaries report is to keep agency and contractor personnel informed as to the existence, status, and availability of computer programs within the agency, and thereby minimize duplication costs and maximize the value of agency software development efforts

  16. DataCenterCooling. Climatization for extreme low energy consumption. Part 1; DataCenterKoeling. Klimatisering voor extreem laag energiegebruik. Deel 1

    Energy Technology Data Exchange (ETDEWEB)

    Havenaar, D.

    2012-12-15

    A data center (or computer center) for IT equipment (e.g. servers) has various amenities (e.g. air conditioning, fire alarm system and backup energy/emergency power supply). Additionally, a data center has fast Internet connections and physical security measures with access control and camera surveillance. Previously, each company had its own server room, with energy-consuming comfort air conditioning systems, in which its ICT equipment was placed. [Translated from Dutch] A data center (computing center) for business-critical ICT equipment such as servers has various facilities (climate control, fire alarm system and backup energy/emergency power supply). In addition, a data center contains fast Internet connections and is provided with physical security measures including access control and camera surveillance. Previously, companies each had their own server room, with energy-guzzling comfort air-conditioning installations, in which their ICT equipment was placed.

  17. DataCenterCooling. Climatization for extreme low energy consumption. Part 2; DataCenterKoeling. Klimatisering voor extreem laag energiegebruik. Deel 2

    Energy Technology Data Exchange (ETDEWEB)

    Havenaar, D.

    2013-01-15

    A data center (or computer center) for IT equipment (e.g. servers) has various amenities (e.g. air conditioning, fire alarm system and backup energy/emergency power supply). Additionally, a data center has fast Internet connections and physical security measures with access control and camera surveillance. Previously, each company had its own server room, with energy-consuming comfort air conditioning systems, in which its ICT equipment was placed. [Translated from Dutch] A data center (computing center) for business-critical ICT equipment such as servers has various facilities (climate control, fire alarm system and backup energy/emergency power supply). In addition, a data center contains fast Internet connections and is provided with physical security measures including access control and camera surveillance. Previously, companies each had their own server room, with energy-guzzling comfort air-conditioning installations, in which their ICT equipment was placed.

  18. Energy Consumption Management of Virtual Cloud Computing Platform

    Science.gov (United States)

    Li, Lin

    2017-11-01

    Research on energy consumption management for virtual cloud computing platforms requires a deeper understanding of how virtual machines and the underlying cloud platform consume energy; only then can the problems of energy consumption management be solved. The key challenge lies in data centers with high energy consumption, which calls for new scientific techniques. Virtualization and cloud computing have become powerful tools in daily life, work and production because of their many advantages; they are developing rapidly and achieve high resource utilization, making them indispensable in the constantly developing information age. This paper summarizes, explains and further analyzes the energy consumption management issues of virtual cloud computing platforms, giving readers a clearer understanding of energy consumption management on such platforms and its relevance to many aspects of everyday life and work.

  19. Applications of automatic differentiation in computational fluid dynamics

    Science.gov (United States)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or in sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
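
    To make the comparison with centered divided differences concrete, the toy sketch below contrasts a forward-mode automatic derivative (via dual numbers) with a centered difference; it is illustrative only and does not reflect ADIFOR's actual source transformation of FORTRAN programs.

        import math

        # Forward-mode AD with dual numbers: every value carries (value, derivative)
        # and the chain rule is applied operation by operation, which is what an AD
        # tool automates for whole programs.
        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __mul__(self, other):
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)
            def sin(self):
                return Dual(math.sin(self.val), math.cos(self.val) * self.der)

        def f(x):
            return x * x.sin() if isinstance(x, Dual) else x * math.sin(x)

        x0, h = 2.0, 1e-5
        ad = f(Dual(x0, 1.0)).der                     # exact derivative via AD
        cd = (f(x0 + h) - f(x0 - h)) / (2.0 * h)      # centered divided difference
        print(ad, cd)   # AD matches sin(x0) + x0*cos(x0) to machine precision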

  20. New Challenges for Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Santoro, Alberto

    2003-01-01

    In view of the new scientific programs established for the LHC (Large Hadron Collider) era, the way to face the technological challenges in computing was to develop a new concept of GRID computing. We show some examples and, in particular, a proposal for high energy physicists in countries like Brazil. Due to the big amount of data and the need for close collaboration, it will be impossible to work in research centers and universities very far from Fermilab or CERN unless a GRID architecture is built. An important effort is being made by the international community to update their computing infrastructure and networks