WorldWideScience

Sample records for computing center stuttgart

  1. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  3. Visit to MPA Stuttgart - Universitaet Stuttgart

    International Nuclear Information System (INIS)

    1980-01-01

    The booklet contains the introduction lectures to the following demonstration tests of the MPA Stuttgart: - Large scale specimen tensile testing; - full size vessel; - high speed tensile machine; - explosion high speed tensile machine; - heat affected zone simulation. (RW)

  4. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  5. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of a variety of supercomputer architectures. It thereby becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  6. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  7. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. The solar-assisted district heating concept in Stuttgart-Burgholzhof; Das solarunterstuetzte Nahwaermekonzept Stuttgart-Burgholzhof

    Energy Technology Data Exchange (ETDEWEB)

Grosshans, D. [Neckarwerke Stuttgart AG (Germany)]

    1998-12-31

The housing area Burgholzhof lies on a hill above the Stuttgart valley basin, in the city district of Bad Cannstatt. After a boundary adjustment carried out in 1997, the building area covers 13.4 hectares and will ultimately contain about 1,360 flats with a total heated area of some 86,000 square metres, housing about 2,800 people. Heat is supplied by a heating plant, a solar thermal plant, and a heat distribution system built by Neckarwerke Stuttgart (NWS, the Stuttgart utility company). NWS paid the construction costs of the solar thermal plant; they are not included in the heat price charged to consumers. (orig.)
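
    As a rough plausibility check of the figures above (an illustrative calculation, not from the source; Python), the planned numbers imply roughly 63 square metres of heated area and about two residents per flat:

        # Back-of-the-envelope check of the Burgholzhof figures
        # (all input values as quoted in the abstract above).
        flats = 1360              # planned dwelling units
        heated_area_m2 = 86_000   # total heated floor area
        residents = 2800          # planned population
        site_area_ha = 13.4       # building area

        print(f"heated area per flat: {heated_area_m2 / flats:.0f} m^2")  # ~63 m^2
        print(f"residents per flat:   {residents / flats:.1f}")           # ~2.1
        print(f"flats per hectare:    {flats / site_area_ha:.0f}")        # ~101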

  9. History and practice of material research on the examples of Material Testing and Materialpruefungsanstalt (MPA) Stuttgart, liquid crystals and screen technology as well as superconductivity. An interdisciplinary teaching project of the University of Stuttgart; Geschichte und Praxis der Materialforschung an den Beispielen Materialpruefung und Materialpruefungsanstalt (MPA) Stuttgart, Fluessigkristalle und Bildschirmtechnik sowie Supraleitung. Ein interdisziplinaeres Lehrprojekt der Universitaet Stuttgart

    Energy Technology Data Exchange (ETDEWEB)

    Hentschel, Klaus; Webel, Josef (eds.)

    2016-07-01

Knowledge of materials research and its history is not widespread, even among scientists and engineers. An interdisciplinary teaching project, held for the first time in the summer semester of 2014 and every summer semester since at the University of Stuttgart, therefore attempts to approach materials research from scientific and technical as well as historical perspectives. Materials testing and the Materials Testing Institute (MPA) Stuttgart, liquid crystals and display technology, and superconductivity were selected as topics, all of which look back on a long tradition of research and teaching in Stuttgart. This anthology collects the materials of the teaching project.

  10. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources in May 2009, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  11. Making Space Cool - Successful Outreach at Yuri's Night Stuttgart

    Science.gov (United States)

    Hill, Christine; Bretschneider, Jens; Nathanson, Emil; Grossmann, Agnes

Yuri’s Night - also known as the “World Space Party” - is the annual celebration commemorating Gagarin’s historic flight on April 12, 1961, and the maiden voyage of the American space shuttle on April 12, 1981. It was created by young space enthusiasts in 2000 at the annual Space Generation Congress and was first celebrated in 2001, registering more than 60 events around the world from the start. Since then, interest in celebrating human spaceflight has grown constantly, to over 350 events across all seven continents in 2013. The honoring of Yuri Gagarin’s first spaceflight in Stuttgart started in 2007 and resulted in one of the largest events outside the US, with five parties following in 2008, 2009, 2010, 2012 and 2013. The Stuttgart event was originally organized as a space party for an audience aged 20 and beyond, with informative aspects in the afternoon followed by a party far into the night. Since 2010, the focus of Yuri’s Night Stuttgart has been to bring awareness of space exploration to people of all ages, including many participatory hands-on space activities for kids and families that attract hundreds of visitors every year. As much as Yuri’s Night is a worldwide party, the events in Stuttgart successfully concentrate on educational aspects that help to inspire new generations of space enthusiasts who will ultimately shape the future of space exploration. It is therefore not only a look back at one of the greatest achievements of the 20th century, but also a look into the future: from multinational cooperation on the International Space Station to the benefits of spaceflight to the introduction of the next generation of space technology. This paper introduces the celebrations of Yuri’s Night in Stuttgart over the past four years and compares them to the early events. It provides a summary of the development of Yuri’s Night, including educational aspects, public relations and media attention, and gives

  12. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together
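
    For scale, a derived figure not stated in the record (a quick Python sketch using the QCDOC numbers quoted above) gives the implied per-processor peak:

        # Per-processor peak implied by the QCDOC figures quoted above
        # (10 teraflop/s per machine, 12,288 processors each).
        peak_flops = 10e12   # 10 teraflop/s
        processors = 12_288
        print(f"{peak_flops / processors / 1e9:.2f} GFLOP/s per processor")  # ~0.81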

  13. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT,J.

    2004-11-01

The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security.

  14. Integrity of power plant components. Research performed by Professor K. Kussmaul at MPA Stuttgart

    International Nuclear Information System (INIS)

    Roos, E.; Maile, K.

    1998-01-01

The history of the Staatliche Materialpruefungsanstalt - the State Materials Testing Institute - (MPA) Stuttgart is characterized by the scientific qualifications of its directors. During his 40 years of work at the MPA, Professor Karl Kussmaul has lived up to this tradition and with his work has firmly established the renowned 'Stuttgarter Schule' - the School of Stuttgart - founded by Bach. (orig.)

  15. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  16. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2005-11-01

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

  17. Center for Advanced Computational Technology

    Science.gov (United States)

    Noor, Ahmed K.

    2000-01-01

The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design, prototyping and operation of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  18. Activities of the MPA Stuttgart in connection with QA of nuclear power plants

    International Nuclear Information System (INIS)

    Maier, H.J.

    1980-01-01

The MPA Stuttgart (Materials Testing Institute, University of Stuttgart) is concerned with quality assurance for German nuclear power plants, in addition to the technical inspection organizations and the plant owners. Orders are placed by state authorities, federal authorities and the RSK (Reactor Safety Commission). Some examples of this work, concerning materials and materials testing, are presented. (orig./RW)

  19. Stuttgart's city archive believes in ice bank technology from Isocal. Fresh keeping treatment for the history of Stuttgart; Stuttgarter Stadtarchiv setzt auf Eisspeicher-Technologie von IsocalL. Frischhaltekur fuer Stuttgarts Geschichte

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Uwe

    2011-03-15

Ancient scrolls, historical construction plans and handwritten fragments by well-known writers from the region - the treasures of the Stuttgart city archive are irreplaceable. With the reopening of the archive on 24 January 2011 in a completely retrofitted historical warehouse ensemble in Bad Cannstatt (Germany), these valuable documents are now stored in the new repository. The move brings significant improvements: substantially more available space, short distances, and mobile shelf walls. Furthermore, the climatic storage conditions for the sensitive archive materials have been raised to the highest level: the old papers, drawings, photos and paintings can now be stored at a constant temperature of 18 C and a relative humidity of 50%.

  20. Building the Teraflops/Petabytes Production Computing Center

    International Nuclear Information System (INIS)

    Kramer, William T.C.; Lucas, Don; Simon, Horst D.

    1999-01-01

In just one decade, the 1990s, supercomputer centers have undergone two fundamental transitions which require rethinking their operation and their role in high performance computing. The first transition in the early to mid-1990s resulted from a technology change in high performance computing architecture. Highly parallel distributed memory machines built from commodity parts increased the operational complexity of the supercomputer center, and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers are introducing loosely coupled clusters of SMPs as their premier high performance computing platforms, while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular, computational grid applications. In this paper we describe what steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.

  1. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

In April 1997, the National Laboratory for High Energy Physics (KEK), the Institute for Nuclear Study, University of Tokyo (INS), and the Meson Science Laboratory, Faculty of Science, University of Tokyo were reorganized into the High Energy Accelerator Research Organization, with the aim of further developing the broad field of accelerator science using high energy accelerators. Within this Research Organization, the Applied Research Laboratory comprises four Centers that support research activities common to the Organization and carry out related research and development (R and D), integrating the four existing centers and their related sections in Tanashi. This support covers not only general assistance but also the preparation and R and D of the systems required for promoting the research and its future plans. Computer technology is essential to the development of this research and can be shared among the various research programs of the Organization. In response, the new Computing Research Center is expected to carry out its duties in cooperation with researchers, ranging from R and D on data analysis for various experiments to computational physics driven by powerful computing capacity such as supercomputers. This report describes the work and present state of the Data Processing Center of KEK in the first chapter and of the computer room of INS in the second chapter, as well as future tasks for the Computing Research Center. (G.K.)

  2. Center for computer security: Computer Security Group conference. Summary

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-06-01

    Topics covered include: computer security management; detection and prevention of computer misuse; certification and accreditation; protection of computer security, perspective from a program office; risk analysis; secure accreditation systems; data base security; implementing R and D; key notarization system; DOD computer security center; the Sandia experience; inspector general's report; and backup and contingency planning. (GHT)

  3. History and practice of material research on the examples of Material Testing and Materialpruefungsanstalt (MPA) Stuttgart, liquid crystals and screen technology as well as superconductivity. An interdisciplinary teaching project of the University of Stuttgart

    International Nuclear Information System (INIS)

    Hentschel, Klaus; Webel, Josef

    2016-01-01

Knowledge of materials research and its history is not widespread, even among scientists and engineers. An interdisciplinary teaching project, held for the first time in the summer semester of 2014 and every summer semester since at the University of Stuttgart, therefore attempts to approach materials research from scientific and technical as well as historical perspectives. Materials testing and the Materials Testing Institute (MPA) Stuttgart, liquid crystals and display technology, and superconductivity were selected as topics, all of which look back on a long tradition of research and teaching in Stuttgart. This anthology collects the materials of the teaching project.

  4. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  5. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  6. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro - much more than in the past - are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization, enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  7. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

Hules, J. [ed.]

    1996-11-01

National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  8. Landgericht Stuttgart condemns power price boycotters to payment

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

With its judgment of 18 December 1980 - 10 0 164/80 - the LG Stuttgart condemned an electricity customer who, being an opponent of nuclear energy, had been withholding ten per cent of the power price and transferring this amount to a trust account, to payment of the arrears. The claim had been submitted by a public utility company cooperating with a nuclear power plant and supplying its customers with electric power from this npp. (orig./HP)

  9. High-voltage shared-service line in the Stuttgart area

    Energy Technology Data Exchange (ETDEWEB)

Goerler, W; Benz, A [Technische Werke der Stadt Stuttgart A.G. (F.R. Germany)]

    1976-01-01

In congested areas the line construction engineer has to cope with a great variety of difficulties - amenity problems, line crossings, and road crossings. The authors describe the prerequisites for and the construction of an HV shared-service line of approx. 25 km in the congested area of Stuttgart, where several three-phase and single-phase a.c. systems are run on one set of pylons.

  10. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  11. Transportation Research & Analysis Computing Center

    Data.gov (United States)

    Federal Laboratory Consortium — The technical objectives of the TRACC project included the establishment of a high performance computing center for use by USDOT research teams, including those from...

  12. Human-centered Computing: Toward a Human Revolution

    OpenAIRE

    Jaimes, Alejandro; Gatica-Perez, Daniel; Sebe, Nicu; Huang, Thomas S.

    2007-01-01

    Human-centered computing studies the design, development, and deployment of mixed-initiative human-computer systems. HCC is emerging from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts.

  13. Elephants produce their electricity by themselves. Wilhelma an integrated energy efficiency concept of Stuttgart (SEE); Elefanten machen ihren Strom selbst. Die Wilhelma als integriertes Energieeffizienzkonzept der Stadt Stuttgart (SEE)

    Energy Technology Data Exchange (ETDEWEB)

Hilse, Annika; Leix, Carmen; Fischer, Klaus; Kranert, Martin [Stuttgart Univ. (Germany). Inst. fuer Siedlungswasserbau, Wasserguete- und Abfallwirtschaft]

    2013-10-01

As part of the overall project 'Stuttgart City with Energy Efficiency (SEE)', an integrated bioenergy concept for the Wilhelma is being developed at the Institute for Sanitary Engineering, Water Quality and Waste Management (ISWA) in cooperation with the Institute of Energy Economics and the Rational Use of Energy (IER). The biomass potential analysis was recently completed; a differentiated analysis of the energy demand is still pending. The Stuttgart zoological and botanical garden Wilhelma has a significant biomass potential: with about 340 acres of gardens and parks under the Wilhelma's care and the residual biomass of around 9,000 zoo animals, the Wilhelma, together with the municipal green spaces, offers a high biomass potential that is currently unused. 3,900 t/a of biomass are suitable for energy recovery through anaerobic digestion in a biogas plant, 87% of the total exploitable biomass; 600 t/a are suitable for energy recovery by incineration, the remaining 13%. If the biomass is fully exploited, a total energy potential of about 6,219 MWh/a could be covered, of which 64% comes from digestion and 36% from incineration. The identified biomass potential can be expected to cover up to 16% of the electricity and heat demand (integrated bioenergy concept). To cover the energy demand fully, the possibility of further use of renewable energy sources (e.g. solar panels on the roofs) must be examined and evaluated. (orig.)
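
    The percentages and totals quoted above can be cross-checked with a short Python sketch (all inputs are the values from the abstract; the 64%/36% split is applied to the stated total):

        # Cross-check of the Wilhelma biomass arithmetic quoted above.
        digestion_t = 3_900    # t/a suitable for anaerobic digestion
        incineration_t = 600   # t/a suitable for incineration
        total_t = digestion_t + incineration_t

        print(f"digestion share:    {digestion_t / total_t:.0%}")     # ~87%
        print(f"incineration share: {incineration_t / total_t:.0%}")  # ~13%

        total_mwh = 6_219  # MWh/a if the potential is fully exploited
        print(f"from digestion:    {0.64 * total_mwh:,.0f} MWh/a")    # ~3,980
        print(f"from incineration: {0.36 * total_mwh:,.0f} MWh/a")    # ~2,239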

  14. Engineering computations at the national magnetic fusion energy computer center

    International Nuclear Information System (INIS)

    Murty, S.

    1983-01-01

    The National Magnetic Fusion Energy Computer Center (NMFECC) was established by the U.S. Department of Energy's Division of Magnetic Fusion Energy (MFE). The NMFECC headquarters is located at Lawrence Livermore National Laboratory. Its purpose is to apply large-scale computational technology and computing techniques to the problems of controlled thermonuclear research. In addition to providing cost effective computing services, the NMFECC also maintains a large collection of computer codes in mathematics, physics, and engineering that is shared by the entire MFE research community. This review provides a broad perspective of the NMFECC, and a list of available codes at the NMFECC for engineering computations is given

  15. The University of Stuttgart IKE/University of Arizona student research program

    International Nuclear Information System (INIS)

    Seale, R.L.

    1988-01-01

The University of Stuttgart's Institut fuer Kernenergetik und Energiesysteme (IKE) and the University of Arizona have had a joint program in which graduate students from the IKE spend 1 yr on the University of Arizona campus. This program started in 1982, largely as the result of an initiative begun by K.H. Hoecker, then director of IKE. Since 1985, Alfred Voss has been director and the program has continued without interruption. Under the program, the Deutscher Akademischer Austauschdienst, a government agency of the Federal Republic of Germany, has funded scholarships for students from IKE, providing support for 1 yr during which they attend the University of Arizona as visiting student scholars and carry out a research project under the direction of one of our faculty, which satisfies part of the requirements for the Ingenieur-Diplom Fachrichtung Maschinenbau. The students receive credit for their research from the University of Stuttgart. The topics have a broad range and include software development, artificial intelligence, radiation transport, and energy management studies.

  16. The role of dedicated data computing centers in the age of cloud computing

    Science.gov (United States)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  17. Torre de televisión, Stuttgart

    Directory of Open Access Journals (Sweden)

    Leonhardt, F.

    1957-11-01

General description and structural analysis of the television tower built in Stuttgart, about 310 m in total height including the steel antenna. Between 138 and 150 m above ground, the tower carries a head in the form of an inverted truncated cone, of larger mean diameter than the shaft, subdivided into four floors housing service rooms and a restaurant. Above this truncated-cone body an observation deck was built, offering splendid panoramic views. The tower proper has a diameter of 10.8 m at the base and 5.04 m at the top, with a height of 138 m between these two extreme sections.
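
    If the shaft diameter is assumed to taper linearly between the two quoted sections (an assumption for illustration; the abstract does not state the taper law), the diameter at any height follows by interpolation:

        # Shaft diameter by linear interpolation between the quoted sections
        # (10.8 m at the base, 5.04 m at 138 m). Linear taper is assumed.
        def shaft_diameter(h_m: float) -> float:
            return 10.8 + (5.04 - 10.8) * h_m / 138.0

        print(f"{shaft_diameter(69.0):.2f} m at mid-height")  # 7.92 m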

  18. Pasarela suspendida en Stuttgart

    Directory of Open Access Journals (Sweden)

    Leonhardt, Fritz

    1963-06-01

Following the gardening exhibition held in Stuttgart in 1961, the parks and gardens along the river Neckar were greatly improved and modified, and as there is considerable pedestrian traffic in this zone, it became necessary to build a footbridge across one of the main roadways. This footbridge leads on one side towards the station and on the other in the direction of the Theatre Palace. A public competition was organised to find the best design for this structure, and various firms submitted projects. The chosen one consists of a suspended, metal, flattened arch structure with a thin walking deck. The arch has a 90 m span and approach ramps, since the large number of pedestrians precluded the use of steps. The arched box girder is 0.5 m deep and 5.50 m wide. The beam is continuous and hangs from cables which are attached at five points of the bridge, separated from each other by 18, 17, 17, 17 and 18 m. These cables run over a metal pillar. To give the pillar greater stability, a number of piles were driven into the ground, and the foundation block for the pillar was placed on these piles. Although the soil is not sufficiently stable to avoid small settlements of the foundations, this is not too important: since the structure is suspended, small settlements of even a few centimetres would not compromise the strength and stability of the structure.
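
    One derived figure, not stated in the record, conveys how slender the deck is (a simple Python calculation from the dimensions above):

        # Slenderness of the arched box girder quoted above
        # (90 m span, 0.5 m structural depth).
        span_m, depth_m = 90.0, 0.5
        print(f"span-to-depth ratio: {span_m / depth_m:.0f}")  # 180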

  19. Center for Computing Research Summer Research Proceedings 2015.

    Energy Technology Data Exchange (ETDEWEB)

Bradley, Andrew Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Parks, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2015-12-18

    The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).

  20. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements for the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  1. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  2. Center for computation and visualization of geometric structures. [Annual], Progress report

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-12

The mission of the Center is to establish a unified environment promoting research, education, and software and tool development. The work is centered on computing, interpreted in a broad sense to include the relevant theory, development of algorithms, and actual implementation. The research aspects of the Center are focused on geometry; correspondingly the computational aspects are focused on three (and higher) dimensional visualization. The educational aspects are likewise centered on computing and focused on geometry. A broader term than education is 'communication', which encompasses the challenge of explaining to the world current research in mathematics, and specifically geometry.

  3. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net...

  4. UC Merced Center for Computational Biology Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Colvin, Michael; Watanabe, Masakatsu

    2010-11-30

Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of biological sciences undergraduate and graduate program, one that emphasized biological concepts and treated biology as an information science, would have a dramatic impact in enabling the transformation of biology. UC Merced, the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy - to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. This report to DOE describes the research and academic programs

  5. Cool computers in a bunker. 10 000 kW of cold demand for 160 000 internet computers; Coole Rechner im Bunker. 10 000 kW Kaeltebedarf fuer 160 000 Internetrechner

    Energy Technology Data Exchange (ETDEWEB)

Klein, S. [Combitherm GmbH, Stuttgart-Fellbach (Germany)]

    2007-06-15

In 2005, Combitherm GmbH of Stuttgart-Fellbach, a producer of refrigeration units and heat pumps specializing in customized solutions, received an unusual order: 1&1 Internet AG, one of the world's biggest internet providers, was looking for a cooling concept for its new central computer facility near Baden-Baden, which was to become a central node in international data transmission. Combitherm already had experience with chilled-water units and free-cooling elements in the 5,000 kW range for a big computer center. The tasks were defined in close cooperation with the customer and with a Karlsruhe consulting engineering firm, and a refrigeration concept was developed. (orig.)
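
    The title's figures imply an average heat load per server, a quick derived number not stated in the record (Python):

        # Average heat load per server implied by the title figures
        # (10,000 kW of cooling demand for 160,000 internet servers).
        cooling_kw = 10_000
        servers = 160_000
        print(f"{cooling_kw * 1000 / servers:.1f} W per server")  # 62.5 W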

  6. On the complaint of unconstitutionality of the Stuttgart Court decisions against non-payers and part payers of electricity bills

    International Nuclear Information System (INIS)

    Fischerhof, H.

    1980-01-01

In a decision dated December 20, 1979, the Federal Constitutional Court refused to accept the complaint of unconstitutionality brought by the Technische Werke (municipal utilities) of the city of Stuttgart (TWS) against two decisions by the Stuttgart Municipal Court in favor of non-payers and part payers of electricity bills. The reasons given for the refusal state that there was every indication that the Stuttgart judgments were faulty. On the basis of this finding, TWS can continue to demand payment in full of their electricity bills. The Federal Constitutional Court maintains that civil rights cannot be invoked by TWS as a corporation under private law whose activities consist exclusively in providing the public with essential services and whose shares are held in full by an agency with rights of jurisdiction. In a footnote, the author argues that the refusal to grant protection of civil rights to TWS is in conflict with the equal rights principle. (HSCH)

  7. Center for Computational Wind Turbine Aerodynamics and Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær

    2014-01-01

In order to design and operate a wind farm optimally it is necessary to know in detail how the wind behaves and interacts with the turbines in a farm. This not only requires knowledge about meteorology, turbulence and aerodynamics, but it also requires access to powerful computers and efficient software. The Center for Computational Wind Turbine Aerodynamics and Atmospheric Turbulence was established in 2010 in order to create a world-leading cross-disciplinary flow center that covers all relevant disciplines within wind farm meteorology and aerodynamics.

  8. Supporting Human Activities - Exploring Activity-Centered Computing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Bardram, Jakob

    2002-01-01

    In this paper we explore an activity-centered computing paradigm that is aimed at supporting work processes that are radically different from the ones known from office work. Our main inspiration is healthcare work that is characterized by an extreme degree of mobility, many interruptions, ad-hoc...

  9. Barrier-free public transport in the Stuttgart region. Results of a round table with disabled persons on entrance to S-Bahn carriages; Barrierefreiheit im oeffentlichen Personennahverkehr (OePNV) in der Region Stuttgart. Ergebnisse eines Runden Tisches mit Betroffenen zum Problembereich Fahrzeugzugang bei S-Bahnen

    Energy Technology Data Exchange (ETDEWEB)

    Pauls, K. (ed.)

    2001-06-01

On July 18, 2000, the Centre of Technology Assessment in Baden-Wuerttemberg invited representatives of organisations for the disabled to a round table to discuss barrier-free public transport in the Stuttgart region with Dr. Witgar Weber, deputy regional director of the Verband Region Stuttgart, and Mr. Nils Himmelmann, representative of Switch Transit Consult. In the first lecture, Dr. Witgar Weber explained the tasks of the Verband Region Stuttgart concerning public transport, especially the S-Bahn. Mr. Nils Himmelmann explained the work he is doing, on commission for the Verband Region Stuttgart, to find solutions that reduce the problem of the vertical and horizontal gap at the entrances of S-Bahn carriages. As a result of the round table, the participants compiled a profile of requirements, including preferences for barrier-free use of the S-Bahn. For the S-Bahn system they demand a partial raising of platforms; where a static raise is not possible, there should be a dynamic partial raise. This measure should be combined with an extending step at the entrances of the S-Bahn to bridge both the horizontal and vertical gaps. Furthermore, the integration of disabled and handicapped people into planning is essential at an early stage, not only for new projects but also for modifications of tracks or carriages. In this way concrete problems can be identified immediately, which reduces the cost of realising the requirements for barrier-free public transport compared with very expensive modifications afterwards. (orig.)

  10. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC), and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  11. Working Group 'Air pollution abatement' of the University of Stuttgart - ALS. Annual report 1990

    International Nuclear Information System (INIS)

    1991-01-01

    Despite considerable efforts for air pollution abatement - examples are desulphurization and nitrogen removal at power stations and large combustion plants as well as catalytic converters for automobiles - there are still many problems to solve. Many small and medium-size companies still have to reduce production-related pollutant emissions, and traffic remains a major source of pollutants. Air pollution abatement in the new Federal states and other Eastern European countries is a particularly urgent task, and the reduction of CO{sub 2} emissions from energy production processes based on fossil fuels is not least a great challenge. Apart from industry, legislation and administration, science especially is called upon to find solutions to these problems. The University of Stuttgart takes up this challenge: numerous institutes - 17 from 8 faculties - have united in the working group 'air pollution abatement' of the University of Stuttgart, which carries out interdisciplinary research in the area of air pollution abatement. This annual report presents the activities of the individual member institutes in the area of air pollution abatement (fields of study, current research projects, cooperations and publications in 1991) as well as joint projects. (orig./KW) [de

  12. Cloud Computing in Science and Engineering and the “SciShop.ru” Computer Simulation Center

    Directory of Open Access Journals (Sweden)

    E. V. Vorozhtsov

    2011-12-01

    Full Text Available Various aspects of cloud computing applications for scientific research, applied design, and remote education are described in this paper. An analysis of the different aspects is performed based on the experience from the “SciShop.ru” Computer Simulation Center. This analysis shows that cloud computing technology has wide prospects in scientific research applications, applied developments and also remote education of specialists, postgraduates, and students.

  13. The Computational Physics Program of the national MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs

  14. Computational geometry lectures at the morningside center of mathematics

    CERN Document Server

    Wang, Ren-Hong

    2003-01-01

    Computational geometry is a borderline subject related to pure and applied mathematics, computer science, and engineering. The book contains articles on various topics in computational geometry, which are based on invited lectures and some contributed papers presented by researchers working during the program on Computational Geometry at the Morningside Center of Mathematics of the Chinese Academy of Sciences. The opening article by R.-H. Wang gives a nice survey of various aspects of computational geometry, many of which are discussed in more detail in other papers in the volume. The topics include problems of optimal triangulation, splines, data interpolation, problems of curve and surface design, problems of shape control, quantum teleportation, and others.

  15. Incorporating modern OpenGL into computer graphics education.

    Science.gov (United States)

    Reina, Guido; Muller, Thomas; Ertl, Thomas

    2014-01-01

    University of Stuttgart educators have updated three computer science courses to incorporate forward-compatible OpenGL. To help students, they developed an educational framework that abstracts some of modern OpenGL's difficult aspects.

  16. A Descriptive Study towards Green Computing Practice Application for Data Centers in IT Based Industries

    Directory of Open Access Journals (Sweden)

    Anthony Jnr. Bokolo

    2018-01-01

    Full Text Available The progressive upsurge in demand for processing and computing power has led to a corresponding upsurge in data center carbon emissions, incurred costs, unethical waste management, depletion of natural resources, and high energy utilization. This raises the issue of attaining sustainability in the data centers of Information Technology (IT) based industries. Green computing practice can be applied to facilitate sustainability attainment, as IT based industries utilize data centers to provide services to staff, practitioners, and end users. But it is a known fact that enterprise servers utilize huge quantities of energy and incur other expenditures in cooling operations, and it is difficult to address the needs of accuracy and efficiency in data centers while encouraging greener application practices alongside cost reduction. Thus this research study focuses on the practical application of Green computing in data centers which house servers, and presents the Green computing life cycle strategies and best practices for better management of data centers in IT based industries. Data were collected through a questionnaire from 133 respondents in industries that currently operate their own in-house data centers. The analysed data were used to verify the Green computing life cycle strategies presented in this study. Findings from the data show that each of the life cycle strategies is significant in assisting IT based industries to apply Green computing practices in their data centers. This study would be of interest to knowledge and data management practitioners as well as environmental managers and academicians in deploying Green data centers in their organizations.

  17. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  18. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga

    2007-01-01

    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment, and is going to switch over to the new production system which has recently been installed. The system will be of great help to the exciting physics analyses of the coming years. The new computer system includes brand-new blade servers, RAID disks, a tape library system, and Ethernet switches. The blade server is the DELL PowerEdge 1955, which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3 GHz, and a total of 650 servers will be used as compute nodes. Each of the RAID disks is configured as RAID-6 with 16 Serial ATA HDDs. The equipment as well as the cooling system is placed in a new large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with a redundant configuration in a cost-effective way. The next major upgrade will take place in thre...

  19. Nuclear energy research in Germany 2008. Research centers and universities

    International Nuclear Information System (INIS)

    Tromm, Walter

    2009-01-01

    This summary report presents nuclear energy research at research centers and universities in Germany in 2008. Activities are explained on the basis of examples of research projects and a description of the situation of research and teaching in general. Participants are the - Karlsruhe Research Center, - Juelich Research Center (FZJ), - Dresden-Rossendorf Research Center (FZD), - Verein fuer Kernverfahrenstechnik und Analytik Rossendorf e.V. (VKTA), - Technical University of Dresden, - University of Applied Sciences, Zittau/Goerlitz, - Institute for Nuclear Energy and Energy Systems (IKE) at the University of Stuttgart, - Reactor Simulation and Reactor Safety Working Group at the Bochum Ruhr University. (orig.)

  20. 6. GESA symposium on experimental stress analysis, May 6 and 7, 1982 Stuttgart

    Energy Technology Data Exchange (ETDEWEB)

    1982-04-01

    Under the scientific chairmanship of Dr. H. Wolf, KWU Muehlheim/Ruhr, the 6th Symposium of the Gemeinschaft Experimentelle Spannungsanalyse (GESA = Experimental Stress Analysis Association) takes place in the Schwabenlandhalle at Fellbach near Stuttgart. The meeting will be organized by VDI/VDE-Gesellschaft Mess- und Regelungstechnik (GMR = VDI/VDE Society for Instrumentation and Control Engineering) located at Duesseldorf. It will be associated with an exposition of firms working in the field of experimental mechanics and presenting among other things developments in the fields of measuring transmitters, data acquisition and processing.

  1. 6. GESA symposium on experimental stress analysis, May 6 and 7, 1982 Stuttgart

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    Under the scientific chairmanship of Dr. H. Wolf, KWU Muehlheim/Ruhr the 6th Symposium of the Gemeinschaft Experimentelle Spannungsanalyse (GESA = Experimental Stress Analysis Association) takes place in the Schwabenlandhalle at Fellbach near Stuttgart. The meeting will be organized by VDI/VDE-Gesellschaft Mess- und Regelungstechnik (GMR = VDI/VDE Society for Instrumentation and Control Engineering) located at Duesseldorf. It will be associated with an exposition of firms working in the field of experimental mechanics and presenting among other things developments in the fields of measuring transmitters, data acquisition and processing. (orig./RW) [de

  2. Conception of a computer for the nuclear medical department of the Augsburg hospital center

    International Nuclear Information System (INIS)

    Graf, G.; Heidenreich, P.

    1984-01-01

    A computer system based on the Siemens R30 process computer has been employed at the Institute of Nuclear Medicine of the Augsburg Hospital Center since early 1981. This system, including the development and testing of organ-specific evaluation programs, was used as a basis for the conception of the new computer system for the department of nuclear medicine of the Augsburg Hospital Center. The computer system was extended and installed according to this conception when the new 1400-bed hospital was opened in the 3rd phase of construction in autumn 1982. (orig.) [de

  3. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computing and personnel resources.

  4. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on TORQUE with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  5. Applied Computational Fluid Dynamics at NASA Ames Research Center

    Science.gov (United States)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  6. A Computer Learning Center for Environmental Sciences

    Science.gov (United States)

    Mustard, John F.

    2000-01-01

    In the fall of 1998, MacMillan Hall opened at Brown University to students. In MacMillan Hall was the new Computer Learning Center, since named the EarthLab, which was outfitted with high-end workstations and peripherals primarily focused on the use of remotely sensed and other spatial data in the environmental sciences. The NASA grant we received as part of the "Centers of Excellence in Applications of Remote Sensing to Regional and Global Integrated Environmental Assessments" was the primary source of funds to outfit this learning and research center. Since opening, we have expanded the range of learning and research opportunities and integrated a cross-campus network of disciplines whose members have come together to learn and use spatial data of all kinds. The EarthLab also forms a core of undergraduate, graduate, and faculty research on environmental problems that draw upon the unique perspective of remotely sensed data. Over the last two years, the EarthLab has been a center for research on the environmental impact of water resource use in arid regions, the impact of the green revolution on forest cover in India, the design of forest preserves in Vietnam, and detailed assessments of the utility of thermal and hyperspectral data for water quality analysis. It has also been used extensively for local environmental activities, in particular studies on the impact of lead on the health of urban children in Rhode Island. Finally, the EarthLab has also served as a key educational and analysis center for activities related to the Brown University Affiliated Research Center that is devoted to transferring university research to the private sector.

  7. Intention and Usage of Computer Based Information Systems in Primary Health Centers

    Science.gov (United States)

    Hosizah; Kuntoro; Basuki N., Hari

    2016-01-01

    The computer-based information system (CBIS) has been adopted in almost all health care settings, including the primary health centers in East Java Province, Indonesia. Some of the software packages available were SIMPUS, SIMPUSTRONIK, SIKDA Generik, and e-puskesmas. Unfortunately, most of the primary health centers did not implement them successfully. This…

  8. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    Science.gov (United States)

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  9. [The importance of Jewish nursing in World War I as shown by the example of the Jewish nurses' home in Stuttgart].

    Science.gov (United States)

    Ruess, Susanne

    2010-01-01

    The history of Jewish nursing in World War I has so far not been central to medical history research. Rosa Bendit's war diary is still the only source available on the voluntary service Jewish nurses provided during World War I. Their number was small compared to that of nurses in general. Jewish nursing in Germany has hardly been researched. Jewish nurses, like their Christian colleagues, took on wartime nursing tasks voluntarily. This paper will focus on the experiences of the nurses who were sent to various locations in East and West by the Stuttgart Jewish Nurses' Home. Based on quotations from the war diary, their position within the medical service will be described, compared and analyzed. The paper draws attention to special characteristics in the comparison of Jewish and Christian nurses and explores issues such as religious observance, religious discrimination, patriotism and differences in the evaluation of the nurses' work. A brief outline of the history of the Stuttgart Jewish Nurses' Home illustrates their working conditions. The Jewish nurses applied themselves with as much effort and devotion as their Christian counterparts. Although there were only a few of them, the Jewish nurses managed to establish a recognized position for themselves within the medical service. The history of Jewish nursing in Stuttgart ended in 1941 when the Jewish Nurses' Home was dissolved by the Nazis and four nurses were murdered in concentration camps.

  10. Knowledge management: Role of the the Radiation Safety Information Computational Center (RSICC)

    Science.gov (United States)

    Valentine, Timothy

    2017-09-01

    The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.

  11. The Role of the Radiation Safety Information Computational Center (RSICC) in Knowledge Management

    International Nuclear Information System (INIS)

    Valentine, T.

    2016-01-01

    Full text: The Radiation Safety Information Computational Center (RSICC) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 packages that have been provided by contributors from various agencies. RSICC’s customers obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to help ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programmes both domestically and internationally, as the majority of RSICC’s customers are students attending U.S. universities. RSICC also supports and promotes workshops and seminars in nuclear science and technology to further the use and/or development of computational tools and data. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC’s activities, services, and systems that support knowledge management and education and training in the nuclear field. (author)

  12. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  13. Use of computers and Internet among people with severe mental illnesses at peer support centers.

    Science.gov (United States)

    Brunette, Mary F; Aschbrenner, Kelly A; Ferron, Joelle C; Ustinich, Lee; Kelly, Michael; Grinley, Thomas

    2017-12-01

    Peer support centers are an ideal setting where people with severe mental illnesses can access the Internet via computers for online health education, peer support, and behavioral treatments. The purpose of this study was to assess computer use and Internet access in peer support agencies. A peer-assisted survey assessed the frequency with which consumers in all 13 New Hampshire peer support centers (n = 702) used computers to access Internet resources. During the 30-day survey period, 200 of the 702 peer support consumers (28%) responded to the survey. More than three-quarters (78.5%) of respondents had gone online to seek information in the past year. About half (49%) of respondents were interested in learning about online forums that would provide information and peer support for mental health issues. Peer support centers may be a useful venue for Web-based approaches to education, peer support, and intervention. Future research should assess facilitators and barriers to use of Web-based resources among people with severe mental illness in peer support centers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    Science.gov (United States)

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek-like holodeck, where holographic avatars could

  15. Argonne's Laboratory computing center - 2007 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.; Pieper, G. W.

    2008-05-28

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific

  16. Hydrogen `96: From vision to reality. 11th world hydrogen energy conference in Stuttgart; Hydrogen `96: Von der Vision zur Realitaet. 11. Welt-Wasserstoffenergie-Konferenz in Stuttgart

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1996-08-01

    More than 700 participants from 45 countries, 140 technical lectures, 6 plenary lectures, 220 poster presentations, and 18 exhibitors from sectors such as high-performance electrolysis for hydrogen generation, fuel cells, hydrogen-powered motor cars and hydrogen filling stations - such are, in a few numbers, the statistics of the Hydrogen `96 conference. The 11th world conference on hydrogen energy took place in Stuttgart from 23 to 28 June 1996. The event was opened by Dr. Angela Merkel, the federal German minister for environmental affairs, nature conservancy and reactor safety. The organization of the conference was in the hands of DECHEMA (Deutsche Gesellschaft fuer Chemisches Apparatewesen, Chemische Technik und Biotechnologie), Frankfurt, who were assisted by experienced partners, among them the Deutsche Forschungsgesellschaft fuer Luft- und Raumfahrt, the VDI Energietechnik Society, the Center for Solar Energy and Hydrogen Research, and the European Federation of Chemical Engineering. (orig.)

  17. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    Science.gov (United States)

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database, built on XNAT, housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images, and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software is made available as open source for use in combining Portable Batch System (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.
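
    As a rough illustration of the batch-distribution idea described above, the sketch below shows how a middleware layer might wrap one scan-processing task in a small PBS script and hand it to the grid with qsub. This is not the actual DAX API; the queue name, resource line, and the process_scan executable are hypothetical placeholders.

        import subprocess
        import tempfile

        def submit_scan_job(scan_id: str, queue: str = "batch") -> str:
            """Write a one-task PBS script for a scan and submit it; returns the job id."""
            script = "\n".join([
                "#!/bin/bash",
                f"#PBS -N process_{scan_id}",
                f"#PBS -q {queue}",
                "#PBS -l nodes=1:ppn=1,walltime=02:00:00",
                f"process_scan --scan {scan_id}  # hypothetical processing executable",
            ]) + "\n"
            with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
                f.write(script)
                path = f.name
            out = subprocess.run(["qsub", path], capture_output=True,
                                 text=True, check=True)
            return out.stdout.strip()  # qsub prints the assigned job id

        job = submit_scan_job("scan-000123")
        print("submitted", job)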

  18. Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number

    Directory of Open Access Journals (Sweden)

    Wang Xingyuan

    2010-01-01

    Full Text Available This paper presents two methods for accurately computing the centers of periodic regions. One method applies to the general M-sets with integer index number, the other to the general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations which determine the centers of the periodic regions. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. We can obtain the centers' coordinates with at least 48 significant digits after the decimal point in both the real and imaginary parts by applying Newton's method to the transformed polynomial equation which determines the centers of the periodic regions. In this paper, we list some centers' coordinates of the k-periodic regions (k = 3, 4, 5, 6) of general M-sets for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
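
    For the classical quadratic M-set (index 2), the same Newton-based idea can be sketched in a few lines: the centers of the period-k components are the simple roots of P_k(c) = f_c^k(0), where f_c(z) = z^2 + c. The sketch below is a minimal illustration of that principle only, not the authors' transformed equations for general or negative indices, and uses mpmath for the high-precision arithmetic the paper calls for.

        from mpmath import mp, mpc

        mp.dps = 60  # work with ~60 significant digits

        def P_and_dP(c, k):
            """Evaluate P_k(c) = f_c^k(0) and dP_k/dc by forward recursion."""
            z, dz = mpc(0), mpc(0)
            for _ in range(k):
                z, dz = z * z + c, 2 * z * dz + 1
            return z, dz

        def center(c0, k, iters=50):
            """Refine an initial guess c0 to a nearby period-k center with Newton's method."""
            c = mpc(c0)
            for _ in range(iters):
                p, dp = P_and_dP(c, k)
                c = c - p / dp
            return c

        # Starting near -1.75 converges to the center of the real period-3 component.
        print(center(-1.75, 3))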

  19. Mitigation of urban heat stress – a modelling case study for the area of Stuttgart

    Directory of Open Access Journals (Sweden)

    Fallmann, Joachim

    2014-04-01

    Full Text Available In 2050 the fraction of the global population living in urban areas will increase to over 69%, which means that around 6.3 billion people are expected to live in urban areas (UN 2011). Cities are the predominant places of human habitation and are vulnerable to extreme weather events that aggravate phenomena like heat stress. Finding mitigation strategies to sustain future development is of great importance, given the expected influences on human health. In this study, the mesoscale numerical model WRF is used on a regional scale for the urban area of Stuttgart, to simulate the effect of urban planning strategies on the dynamical processes affecting urban climate. After comparing two urban parameterisation schemes, a sensitivity study for different scenarios is performed; it shows that a change of the reflective properties of surfaces has the highest impact on near-surface temperatures, compared to an increase of urban green areas or a decrease of building density. The Urban Heat Island (UHI) describes the temperature difference between urban and rural areas; it characterises regional urban climate and is responsible for urban-rural circulation patterns. Applying urban planning measures may decrease the intensity of the UHI in the study area by up to 2 °C by using heat-reflective roof paints, or by 1 °C through replacing impervious surfaces with natural vegetation in the urban vicinity - compared to a value of 2.5 °C for the base case. Because of its topographical location in a valley and the overall high temperatures in this region, the area of Stuttgart suffers from heat stress to a comparatively large extent.

  20. The NIRA computer program package (photonuclear data center). Final report

    International Nuclear Information System (INIS)

    Vander Molen, H.J.; Gerstenberg, H.M.

    1976-02-01

    The Photonuclear Data Center's NIRA library of programs, executable from mass storage on the National Bureau of Standards' central computer facility, is described. Detailed instructions are given (with examples) for the use of the library to analyze, evaluate, synthesize, and produce for publication camera-ready tabular and graphical presentations of digital photonuclear reaction cross-section data. NIRA is the acronym for Nuclear Information Research Associate

  1. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
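
    To make the provisioning step concrete, a minimal sketch with the OpenStack SDK is given below. The cloud name, image, and flavor are hypothetical placeholders, and the institute's actual integration with the batch system is considerably more involved.

        import openstack

        conn = openstack.connect(cloud="hpc-center")  # credentials come from clouds.yaml

        def boot_worker(name):
            """Start one batch-worker VM and wait until it is active."""
            return conn.create_server(
                name=name,
                image="worker-image",  # pre-built image with the experiment software
                flavor="m1.large",
                wait=True,
                auto_ip=False,
            )

        def retire_worker(server):
            """Tear the VM down again once the batch queue drains."""
            conn.delete_server(server.id, wait=True)

        server = boot_worker("batch-worker-001")
        print(server.status)  # 'ACTIVE' once the scheduler has placed it
        retire_worker(server)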

  2. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research work horse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, researchers can count on Jazz to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  3. Modeling Remote I/O versus Staging Tradeoff in Multi-Data Center Computing

    International Nuclear Information System (INIS)

    Suslu, Ibrahim H

    2014-01-01

    In multi-data center computing, the data to be processed is not always local to the computation. This is a major challenge especially for data-intensive Cloud computing applications, since large amounts of data would need to be either moved to the local sites (staging) or accessed remotely over the network (remote I/O). Cloud application developers generally choose between staging and remote I/O intuitively, without making any scientific comparison specific to their application's data access patterns, since there is no generic model available that they can use. In this paper, we propose a generic model for Cloud application developers which helps them choose the most appropriate data access mechanism for their specific application workloads. We define the parameters that potentially affect the end-to-end performance of multi-data center Cloud applications which need to access large datasets over the network. To test and validate our models, we implemented a series of synthetic benchmark applications to simulate the most common data access patterns encountered in Cloud applications. We show that our model provides promising results in different settings with different parameters, such as network bandwidth, server and client capabilities, and data access ratio
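
    Although the paper's own model is not reproduced in this abstract, the tradeoff it quantifies can be illustrated with a deliberately simplified back-of-the-envelope calculation; all formulas and numbers below are illustrative assumptions, not the paper's parameters.

        # Compare moving the whole dataset once (staging) with fetching only the
        # accessed fraction over the network, paying a per-request latency each time.

        def staging_time(dataset_gb, bandwidth_gbps):
            """Transfer the full dataset to local storage before computing."""
            return dataset_gb * 8 / bandwidth_gbps  # seconds

        def remote_io_time(dataset_gb, access_ratio, bandwidth_gbps,
                           requests, latency_s):
            """Read only the accessed fraction remotely, request by request."""
            transfer = dataset_gb * access_ratio * 8 / bandwidth_gbps
            return transfer + requests * latency_s

        # Example: 500 GB dataset, 10 Gb/s link, 5 ms round-trip latency.
        stage = staging_time(500, 10)
        remote = remote_io_time(500, access_ratio=0.2, bandwidth_gbps=10,
                                requests=10_000, latency_s=0.005)
        print(f"staging: {stage:.0f} s, remote I/O: {remote:.0f} s")
        # Here remote I/O wins (130 s vs 400 s); raising the access ratio or the
        # request count tips the balance back toward staging.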

  4. The Stuttgart-Heidelberg Model of Active Feedback Driven Quality Management: Means for the Optimization of Psychotherapy Provision

    Directory of Open Access Journals (Sweden)

    Hans Kordy

    2003-01-01

    Full Text Available Quality management seeks to evaluate psychotherapeutic treatment. A central aspect concerns the development of suitable assessment batteries and evaluation criteria. The Stuttgart-Heidelberg (S-H) model represents a system that provides concepts, psychometric instruments, and a computer program developed for quality management based on active feedback. The central information of the Stuttgart-Heidelberg model is the individual treatment outcome. The premise is that psychotherapy can be improved by providing information on therapeutic outcomes (especially negative ones), since problem-solving processes are stimulated by the feedback received. The present work presents an assessment inventory, the standardized evaluation of outcomes, and the various feedback tools of the S-H model. A systematic study including 1715 patients of a hospital specialized in psychosomatic disorders documents the validity of this approach. The empirical results support a strategy of transparency about what happens in clinical practice, for example about the treatments administered, their outcomes and costs. Implications for the further optimization of health services are discussed.

  5. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  6. Rational use of energy at the University of Stuttgart building environment. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, F.; Freihofer, H. [Forschungsinst. fuer Kerntechnik und Energiewandlung e.V., Stuttgart (Germany). Abt. Wissensverarbeitung und Numerik; Stergiaropoulos, K.; Claus, G. [Forschungsgesellschaft Heizung-Lueftung-Klimatechnik Stuttgart mbH (Germany); Harter, J.; Ast, H. [IFB, Dr. R. Braschel GmbH, Stuttgart (Germany); Will, M.; Haerther, H.; Franke, H. [Sulzer-Infra Deutschland GmbH, Stuttgart (Germany)

    1999-07-01

    We have demonstrated with the project REUSE that it is possible to optimise complex building ensembles (properties) energetically by applying the contracting model. However, some basic requirements have to be fulfilled to reach such a goal. They include: 1. a basic consensus among all those dealing with specific aspects of the energy use of the buildings considered; 2. transparent and up-to-date measured energy consumption data; 3. a unified and reliable system for evaluating the measures taken to save energy (baseline); 4. partners who are able to define the measures to be undertaken to save energy in a specific building and who can implement these measures effectively and in a user-friendly way. This report describes how we fulfilled these requirements at the campus 'Pfaffenwald' of the University of Stuttgart. Numerous everyday difficulties had to be overcome before the project became a success and valuable conclusions could be drawn. These conclusions went far beyond the original goal of saving energy worth about 3 to 4 million Deutsche Marks and finally resulted in a new thinking about energy use on the campus. We therefore regard the project REUSE as extremely successful and hope it will encourage similar projects and provide valuable hints for them. (orig.)

  7. The Role of Computers in Research and Development at Langley Research Center

    Science.gov (United States)

    Wieseman, Carol D. (Compiler)

    1994-01-01

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

  8. Computer-aided dispatch--traffic management center field operational test : Washington State final report

    Science.gov (United States)

    2006-05-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...

  9. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), have been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called the Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable the synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer-Aided Science (CASC) to promote the Atomic Energy Research and Investigation (AERI). This article reviews the achievements obtained so far in the R and D of grid computing technology. (T. Tanaka)

  10. Secure data exchange between intelligent devices and computing centers

    Science.gov (United States)

    Naqvi, Syed; Riguidel, Michel

    2005-03-01

    The advent of reliable spontaneous networking technologies (commonly known as wireless ad-hoc networks) has ostensibly raised stakes for the conception of computing intensive environments using intelligent devices as their interface with the external world. These smart devices are used as data gateways for the computing units. These devices are employed in highly volatile environments where the secure exchange of data between these devices and their computing centers is of paramount importance. Moreover, their mission critical applications require dependable measures against the attacks like denial of service (DoS), eavesdropping, masquerading, etc. In this paper, we propose a mechanism to assure reliable data exchange between an intelligent environment composed of smart devices and distributed computing units collectively called 'computational grid'. The notion of infosphere is used to define a digital space made up of a persistent and a volatile asset in an often indefinite geographical space. We study different infospheres and present general evolutions and issues in the security of such technology-rich and intelligent environments. It is beyond any doubt that these environments will likely face a proliferation of users, applications, networked devices, and their interactions on a scale never experienced before. It would be better to build in the ability to uniformly deal with these systems. As a solution, we propose a concept of virtualization of security services. We try to solve the difficult problems of implementation and maintenance of trust on the one hand, and those of security management in heterogeneous infrastructure on the other hand.
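
    The abstract stops short of a concrete mechanism, so the sketch below is an illustration only, not the authors' proposal: one standard way a smart device could protect a message to its computing center against eavesdropping and tampering is an AEAD cipher, here AES-GCM from the Python cryptography package, with the device identity bound in as associated data.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)  # pre-shared between device and grid
        aead = AESGCM(key)

        def seal(plaintext: bytes, device_id: bytes) -> bytes:
            """Encrypt and authenticate; the device id is bound as associated data."""
            nonce = os.urandom(12)  # never reuse a nonce with the same key
            return nonce + aead.encrypt(nonce, plaintext, device_id)

        def open_sealed(blob: bytes, device_id: bytes) -> bytes:
            """Verify and decrypt; raises InvalidTag if the message was altered."""
            nonce, ciphertext = blob[:12], blob[12:]
            return aead.decrypt(nonce, ciphertext, device_id)

        msg = seal(b"sensor batch 42", b"device-17")
        print(open_sealed(msg, b"device-17"))  # b'sensor batch 42'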

  11. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  12. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    Science.gov (United States)

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  13. Computer Vision Syndrome among Call Center Employees at Telecommunication Company in Bandung

    Directory of Open Access Journals (Sweden)

    Ghea Nursyifa

    2016-06-01

    Full Text Available Background: The occurrence of Computer Vision Syndrome (CVS) at the workplace has increased over recent decades due to the prolonged use of computers. Knowledge of CVS is necessary in order to develop an awareness of how to prevent and alleviate its prevalence. The objective of this study was to assess the knowledge of CVS among call center employees and to explore the CVS symptom most frequently experienced by the workers. Methods: A descriptive cross-sectional study was conducted during the period of September to November 2014 at a telecommunication company in Bandung using a questionnaire consisting of 30 questions. Of the 30 questions/statements, 15 statements were about knowledge of CVS and the other 15 questions were about the occurrence of CVS and its symptoms. In this study 125 call center employees participated as respondents, selected by consecutive sampling. The level of knowledge was divided into 3 categories: good (76–100%), fair (56–75%), and poor (<56%). The collected data were presented as frequency tabulations. Results: 74.4% of the respondents had poor knowledge of CVS. The symptom most frequently experienced by the respondents was asthenopia. Conclusions: CVS occurs in call center employees with various symptoms and signs. This situation is not supported by good knowledge of the syndrome, which can hamper prevention programs.

  14. Current state and future direction of computer systems at NASA Langley Research Center

    Science.gov (United States)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  15. Computer-aided dispatch--traffic management center field operational test : state of Utah final report

    Science.gov (United States)

    2006-07-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...

  16. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  17. Rib Radiography versus Chest Computed Tomography in the Diagnosis of Rib Fractures.

    Science.gov (United States)

    Sano, Atsushi

    2018-05-01

     The accurate diagnosis of rib fractures is important in chest trauma. Diagnostic images following chest trauma are usually obtained via chest X-ray, chest computed tomography, or rib radiography. This study evaluated the diagnostic characteristics of rib radiography and chest computed tomography.  Seventy-five rib fracture patients who underwent both chest computed tomography and rib radiography between April 2008 and December 2013 were included. Rib radiographs, centered on the site of pain, were taken from two directions. Chest computed tomography was performed using a 16-row multidetector scanner with 5-mm slice-pitch without overlap, and axial images were visualized in a bone window.  In total, 217 rib fractures were diagnosed in 75 patients. Rib radiography missed 43 rib fractures in 24 patients. The causes were overlap with organs in 15 cases, trivial fractures in 21 cases, and injury outside the imaging range in 7 cases. Left lower rib fractures were often missed due to overlap with the heart, while middle and lower rib fractures were frequently not diagnosed due to overlap with abdominal organs. Computed tomography missed 21 rib fractures in 17 patients. The causes were horizontal fractures in 10 cases, trivial fractures in 9 cases, and insufficient breath holding in 1 case.  In rib radiography, overlap with organs and fractures outside the imaging range were characteristic reasons for missed diagnoses. In chest computed tomography, horizontal rib fractures and insufficient breath holding were often responsible. We should take these challenges into account when diagnosing rib fractures. Georg Thieme Verlag KG Stuttgart · New York.

  18. Initial constructs for patient-centered outcome measures to evaluate brain-computer interfaces.

    Science.gov (United States)

    Andresen, Elena M; Fried-Oken, Melanie; Peters, Betts; Patrick, Donald L

    2016-10-01

    The authors describe preliminary work toward the creation of patient-centered outcome (PCO) measures to evaluate brain-computer interface (BCI) as an assistive technology (AT) for individuals with severe speech and physical impairments (SSPI). In Phase 1, 591 items from 15 existing measures were mapped to the International Classification of Functioning, Disability and Health (ICF). In Phase 2, qualitative interviews were conducted with eight people with SSPI and seven caregivers. Resulting text data were coded in an iterative analysis. Most items (79%) were mapped to the ICF environmental domain; over half (53%) were mapped to more than one domain. The ICF framework was well suited for mapping items related to body functions and structures, but less so for items in other areas, including personal factors. Two constructs emerged from qualitative data: quality of life (QOL) and AT. Component domains and themes were identified for each. Preliminary constructs, domains and themes were generated for future PCO measures relevant to BCI. Existing instruments are sufficient for initial items but do not adequately match the values of people with SSPI and their caregivers. Field methods for interviewing people with SSPI were successful, and support the inclusion of these individuals in PCO research. Implications for Rehabilitation Adapted interview methods allow people with severe speech and physical impairments to participate in patient-centered outcomes research. Patient-centered outcome measures are needed to evaluate the clinical implementation of brain-computer interface as an assistive technology.

  19. Computed tomography-guided core-needle biopsy of lung lesions: an oncology center experience

    Energy Technology Data Exchange (ETDEWEB)

    Guimaraes, Marcos Duarte; Fonte, Alexandre Calabria da; Chojniak, Rubens, E-mail: marcosduarte@yahoo.com.b [Hospital A.C. Camargo, Sao Paulo, SP (Brazil). Dept. of Radiology and Imaging Diagnosis; Andrade, Marcony Queiroz de [Hospital Alianca, Salvador, BA (Brazil); Gross, Jefferson Luiz [Hospital A.C. Camargo, Sao Paulo, SP (Brazil). Dept. of Chest Surgery

    2011-03-15

    Objective: The present study is aimed at describing the experience of an oncology center with computed tomography-guided core-needle biopsy of pulmonary lesions. Materials and Methods: Retrospective analysis of 97 computed tomography-guided core-needle biopsies of pulmonary lesions performed between 1996 and 2004 in a Brazilian reference oncology center (Hospital do Cancer - A.C. Camargo). Information regarding material appropriateness and the specific diagnoses was collected and analyzed. Results: Among the 97 lung biopsies, 94 (96.9%) supplied specimens appropriate for histological analysis, with 71 (73.2%) cases diagnosed as malignant lesions and 23 (23.7%) diagnosed as benign lesions. Specimens were inappropriate for analysis in three cases. A specific diagnosis was obtained in 83 (85.6%) cases, with high rates for both malignant lesions (63 cases; 88.7%) and benign lesions (20 cases; 86.7%). As regards complications, a total of 12 cases were observed, as follows: 7 (7.2%) cases of hematoma, 3 (3.1%) cases of pneumothorax and 2 (2.1%) cases of hemoptysis. Conclusion: Computed tomography-guided core-needle biopsy of lung lesions demonstrated high rates of material appropriateness and diagnostic specificity, and low rates of complications in the present study. (author)

  20. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  1. The effective use of virtualization for selection of data centers in a cloud computing environment

    Science.gov (United States)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are the places which consist of networks of remote servers to store, access and process data. Cloud computing is a technology where users worldwide submit their tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers need to employ the virtualization concept so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of the service provider.
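
    The routing rule described in this abstract reduces to a per-server sum and an argmin over data centers. The following minimal Python sketch illustrates that logic only; it is not the authors' implementation, and the Server class, the energy figures and the data-center names are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Server:
        vm_energies: list  # assumed per-VM energy figures (e.g. in watt-hours)

        def energy(self) -> float:
            # A server's virtualization energy is the sum over its VMs.
            return sum(self.vm_energies)

    def datacenter_energy(servers: list) -> float:
        # Total data-center energy = summation of individual server energies.
        return sum(s.energy() for s in servers)

    def select_datacenter(datacenters: dict) -> str:
        # Route the submitted task to the least-energy data center.
        return min(datacenters, key=lambda name: datacenter_energy(datacenters[name]))

    centers = {
        "dc-east": [Server([3.1, 2.4]), Server([1.9])],       # total 7.4
        "dc-west": [Server([2.0, 2.2]), Server([0.8, 1.1])],  # total 6.1
    }
    print(select_datacenter(centers))  # -> "dc-west"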

  2. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    International Nuclear Information System (INIS)

    Bogdanov, A.V.; Yuzhanin, N.V.; Zolotarev, V.I.; Ezhakova, T.R.

    2017-01-01

    In this article the problem of scientific projects support throughout their lifecycle in the computer center is considered in every aspect of support. Configuration Management system plays a connecting role in processes related to the provision and support of services of a computer center. In view of strong integration of IT infrastructure components with the use of virtualization, control of infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research projects support, the influence of the Configuration Management system is reviewed and development of the corresponding elements of the system is described in the present paper.

  3. The psychology of computer displays in the modern mission control center

    Science.gov (United States)

    Granaas, Michael M.; Rhea, Donald C.

    1988-01-01

    Work at NASA's Western Aeronautical Test Range (WATR) has demonstrated the need for increased consideration of psychological factors in the design of computer displays for the WATR mission control center. These factors include color perception, memory load, and cognitive processing abilities. A review of relevant work in the human factors psychology area is provided to demonstrate the need for this awareness. The information provided should be relevant in control room settings where computerized displays are being used.

  4. Computer-aided dispatch--traffic management center field operational test final detailed test plan : WSDOT deployment

    Science.gov (United States)

    2003-10-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...

  5. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    Science.gov (United States)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article the problem of scientific projects support throughout their lifecycle in the computer center is considered in every aspect of support. Configuration Management system plays a connecting role in processes related to the provision and support of services of a computer center. In view of strong integration of IT infrastructure components with the use of virtualization, control of infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research projects support, the influence of the Configuration Management system is being reviewed and development of the corresponding elements of the system is being described in the present paper.

  6. Building a Prototype of LHC Analysis Oriented Computing Centers

    Science.gov (United States)

    Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.

    2012-12-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) has been formed in 2010 to prototype Analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept, with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user not expert in computing. On the storage side, we are experimenting with storage techniques allowing for remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architecture, also using virtual partitioning of the same physical networking for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for an exhaustive monitoring of their processes at the site, and for an efficient support system in case of problems. We will report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.

  7. Building a Prototype of LHC Analysis Oriented Computing Centers

    International Nuclear Information System (INIS)

    Bagliesi, G; Boccali, T; Della Ricca, G; Donvito, G; Paganoni, M

    2012-01-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) has been formed in 2010 to prototype Analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept, with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user not expert in computing. On the storage side, we are experimenting with storage techniques allowing for remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architecture, also using virtual partitioning of the same physical networking for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for an exhaustive monitoring of their processes at the site, and for an efficient support system in case of problems. We will report on the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.

  8. Computer-aided dispatch--traffic management center field operational test final test plans : state of Utah

    Science.gov (United States)

    2004-01-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...

  9. New developments in delivering public access to data from the National Center for Computational Toxicology at the EPA

    Science.gov (United States)

    Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this researc...

  10. CENTER CONDITIONS AND CYCLICITY FOR A FAMILY OF CUBIC SYSTEMS: COMPUTER ALGEBRA APPROACH.

    Science.gov (United States)

    Ferčec, Brigita; Mahdi, Adam

    2013-01-01

    Using methods of computational algebra we obtain an upper bound for the cyclicity of a family of cubic systems. We overcame the problem of nonradicality of the associated Bautin ideal by moving from the ring of polynomials to a coordinate ring. Finally, we determine the number of limit cycles bifurcating from each component of the center variety.
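
    The step from focus quantities to the center variety runs through a Groebner-basis computation, which can be illustrated in a few lines. The SymPy sketch below operates on two invented polynomials standing in for focus quantities; the actual quantities of the cubic family studied in the paper are far larger and are not reproduced here.

    from sympy import symbols, groebner

    a, b = symbols("a b")
    g1 = a * b - b       # hypothetical focus quantity 1
    g2 = a**2 * b - b    # hypothetical focus quantity 2

    # A Groebner basis of the ideal generated by g1, g2 exposes its
    # vanishing locus, the analogue of the center variety for these toys.
    G = groebner([g1, g2], a, b, order="lex")
    print(G)  # basis [a*b - b]: the locus is b = 0 or a = 1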

  11. Technical Data Management Center: a focal point for meteorological and other environmental transport computing technology

    International Nuclear Information System (INIS)

    McGill, B.; Maskewitz, B.F.; Trubey, D.K.

    1981-01-01

    The Technical Data Management Center (TDMC), which collects, packages, analyzes, and distributes information, computer technology and data, including meteorological and other environmental transport work, is located at the Oak Ridge National Laboratory within the Engineering Physics Division. Major activities include maintaining a collection of computing technology and associated literature citations to provide capabilities for meteorological and environmental work. Details of the activities on behalf of TDMC's sponsoring agency, the US Nuclear Regulatory Commission, are described

  12. A research update for the Stuttgart National Aquaculture Research Center

    Science.gov (United States)

    Aquaculture (fish farming) has played an ever-increasing role in providing people with fish, shrimp, and shellfish. Aquaculture is currently the fastest growing sector of global food production and in 2016 totaled 90 million tons valued at $180 billion. The production of food-fish from aquaculture...

  13. Bridging the digital divide by increasing computer and cancer literacy: community technology centers for head-start parents and families.

    Science.gov (United States)

    Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith

    2009-01-01

    This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.

  14. Universitätsbibliografie mit PUMA. Praxisbericht aus der Einführung der Universitätsbibliografie an der Universitätsbibliothek Stuttgart

    Directory of Open Access Journals (Sweden)

    Sibylle Hermann

    2017-12-01

    The added value of the Academic Publication Management system PUMA at the University of Stuttgart, compared to a bibliographic cataloguing environment, lies in the direct processing of data by the scientists themselves, without an interim step via the library. For the university content management system, a plugin is available which dynamically integrates publication lists into employees' web pages. The metadata is loaded directly from PUMA, filtered, sorted, and can be output in the desired citation style. PUMA offers many interfaces and display options.

  15. Examining the Fundamental Obstructs of Adopting Cloud Computing for 9-1-1 Dispatch Centers in the USA

    Science.gov (United States)

    Osman, Abdulaziz

    2016-01-01

    The purpose of this research study was to examine the unknown fears of embracing cloud computing, which stretch across dimensions such as fear of change among leaders and the complexity of the technology, in 9-1-1 dispatch centers in the USA. The problem addressed in the study was that many 9-1-1 dispatch centers in the USA are still using old…

  16. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to the introduction to pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included. At the same time, the topic is among the most motivating ones, due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners' experimentation with the provided programming models, building learners' competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C was chosen for developing the programming models, with the message passing interface (MPI) and OpenMP as parallelization tools.
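
    The paper itself builds its programming models in C with MPI and OpenMP. Purely to illustrate the multiphase pipeline idea, the sketch below chains two service phases through queues using Python's standard library; the phase functions and item values are invented.

    from multiprocessing import Process, Queue

    def inc(x):
        return x + 1   # service performed by phase 1

    def dbl(x):
        return x * 2   # service performed by phase 2

    def phase(fn, q_in, q_out):
        # One phase of the multiphase queueing system: consume items until
        # the None sentinel arrives, feeding results to the next phase.
        for item in iter(q_in.get, None):
            q_out.put(fn(item))
        q_out.put(None)  # propagate shutdown downstream

    if __name__ == "__main__":
        q0, q1, q2 = Queue(), Queue(), Queue()
        p1 = Process(target=phase, args=(inc, q0, q1))
        p2 = Process(target=phase, args=(dbl, q1, q2))
        p1.start(); p2.start()
        for i in range(5):
            q0.put(i)          # jobs enter the first queue
        q0.put(None)
        print([v for v in iter(q2.get, None)])  # -> [2, 4, 6, 8, 10]
        p1.join(); p2.join()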

  17. HYPERSPECTRAL REMOTE SENSING WITH THE UAS "STUTTGARTER ADLER" – CHALLENGES, EXPERIENCES AND FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    A. Buettner

    2013-08-01

    Full Text Available The UAS "Stuttgarter Adler" was designed as a flexible and cost-effective remote-sensing platform for the acquisition of high-quality environmental data. Different missions for precision agriculture applications and BRDF research have been successfully performed with a multispectral camera system and a spectrometer as main payloads. Currently, an imaging spectrometer is being integrated into the UAS as a new payload, which enables the recording of hyperspectral data in more than 200 spectral bands in the visible and near-infrared spectrum. The recording principle of the hyperspectral instrument is based on a line scanner. Each line is stored as a matrix image with spectral information along one axis and spatial information along the other axis of the image. Besides a detailed specification of the system concept and instrument design, the calibration procedure of the hyperspectral sensor system is discussed and results of the laboratory calibration are presented. The complete processing chain of measurement data is described and first results of measurement flights over agricultural test sites are presented.
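
    The line-scanner principle described above maps directly onto a simple array layout. The NumPy sketch below is only an illustration with invented dimensions and a stand-in read_frame function: each frame is a matrix image (spatial axis by spectral axis), and stacking frames along the flight direction yields the hyperspectral cube.

    import numpy as np

    n_lines, n_pixels, n_bands = 100, 512, 200  # invented dimensions

    def read_frame(i):
        # Stand-in for one sensor readout: spatial pixels x spectral bands.
        return np.random.rand(n_pixels, n_bands)

    cube = np.stack([read_frame(i) for i in range(n_lines)], axis=0)
    print(cube.shape)            # (100, 512, 200): lines x pixels x bands

    band = cube[:, :, 42]        # one spectral band over the whole scene
    spectrum = cube[60, 256, :]  # full spectrum of a single ground pixel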

  18. Annual report of R and D activities in Center for Promotion of Computational Science and Engineering and Center for Computational Science and e-Systems from April 1, 2005 to March 31, 2006

    International Nuclear Information System (INIS)

    2007-03-01

    This report provides an overview of the research and development activities of the Center for Computational Science and Engineering (CCSE), JAERI, in the former half of the fiscal year 2005 (April 1, 2005 - Sep. 30, 2005) and those of the Center for Computational Science and e-Systems (CCSE), JAEA, in the latter half of the fiscal year 2005 (Oct. 1, 2005 - March 31, 2006). In the former half term, the activities were performed by 5 research groups: the Research Group for Computational Science in Atomic Energy, the Research Group for Computational Material Science in Atomic Energy, the R and D Group for Computer Science, the R and D Group for Numerical Experiments, and the Quantum Bioinformatics Group in CCSE. At the beginning of the latter half term, at the moment of the unification of JNC (Japan Nuclear Cycle Development Institute) and JAERI (Japan Atomic Energy Research Institute), these 5 groups were integrated into two offices, the Simulation Technology Research and Development Office and the Computer Science Research and Development Office, and the latter-half activities were operated by these two offices. A big project, the ITBL (Information Technology Based Laboratory) project, together with fundamental computational research for atomic energy plants, was performed mainly by the R and D Group for Computer Science and the Research Group for Computational Science in Atomic Energy in the former half term, and by their integrated office, the Computer Science Research and Development Office, in the latter half. The main result was verification by structure analysis of a real plant, executable on the Grid environment, which received an Honorable Mention in the Analytic Challenge at the conference 'Supercomputing (SC05)'. The materials science and bioinformatics work in the atomic energy research field was carried out by three groups: the Research Group for Computational Material Science in Atomic Energy, the R and D Group for Computer Science, the R and D Group for Numerical Experiments, and the Quantum Bioinformatics

  19. CNC Turning Center Operations and Prove Out. Computer Numerical Control Operator/Programmer. 444-334.

    Science.gov (United States)

    Skowronski, Steven D.

    This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…

  20. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results
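
    As a flavor of the Monte Carlo technique mentioned above, the toy sketch below estimates the uncollided transmission through a purely absorbing slab and checks it against the analytic exponential. It is not an RSIC code; the cross section and slab thickness are made up.

    import math
    import random

    def transmission(sigma_t, thickness, n=100_000, seed=1):
        # Fraction of particles crossing the slab without colliding;
        # path lengths are sampled from an exponential with mean 1/sigma_t.
        random.seed(seed)
        passed = 0
        for _ in range(n):
            d = -math.log(1.0 - random.random()) / sigma_t
            if d > thickness:
                passed += 1
        return passed / n

    sigma_t, x = 0.5, 4.0  # invented cross section (1/cm) and thickness (cm)
    print(transmission(sigma_t, x))  # ~0.135
    print(math.exp(-sigma_t * x))    # analytic check: exp(-2) = 0.1353...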

  1. Polymer waveguides for electro-optical integration in data centers and high-performance computers.

    Science.gov (United States)

    Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan

    2015-02-23

    To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.

  2. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    International Nuclear Information System (INIS)

    Kirk, Bernadette Lugue

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries. An important activity of RSICC is its participation in international efforts on computational and experimental benchmarks. An example is the Shielding Integral Benchmarks Archival Database (SINBAD), which includes shielding benchmarks for fission, fusion and accelerators. RSICC is funded by the United States Department of Energy, Department of Homeland Security and Nuclear Regulatory Commission.

  3. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries.

  4. Computer modeling with randomized-controlled trial data informs the development of person-centered aged care homes.

    Science.gov (United States)

    Chenoweth, Lynn; Vickland, Victor; Stein-Parbury, Jane; Jeon, Yun-Hee; Kenny, Patricia; Brodaty, Henry

    2015-10-01

    To answer questions on the essential components (services, operations and resources) of a person-centered aged care home (iHome) using computer simulation. iHome was developed with AnyLogic software using extant study data obtained from 60 Australian aged care homes, 900+ clients and 700+ aged care staff. Bayesian analysis of simulated trial data will determine the influence of different iHome characteristics on care service quality and client outcomes. Interim results: A person-centered aged care home (socio-cultural context) and care/lifestyle services (interactional environment) can produce positive outcomes for aged care clients (subjective experiences) in the simulated environment. Further testing will define essential characteristics of a person-centered care home.

  5. Computer Center: Software Review.

    Science.gov (United States)

    Duhrkopf, Richard, Ed.; Belshe, John F., Ed.

    1988-01-01

    Reviews a software package, "Mitosis-Meiosis," available for Apple II or IBM computers with colorgraphics capabilities. Describes the documentation, presentation and flexibility of the program. Rates the program based on graphics and usability in a biology classroom. (CW)

  6. Computational Physics Program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1984-12-01

    The principal objective of the computational physics group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. A summary of the group's activities is presented, including computational studies in MHD equilibria and stability, plasma transport, Fokker-Planck calculations, and efficient numerical and programming algorithms. References are included

  7. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    OpenAIRE

    Buyya, Rajkumar; Beloglazov, Anton; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational cos...

  8. M-center growth in alkali halides: computer simulation

    International Nuclear Information System (INIS)

    Aguilar, M.; Jaque, F.; Agullo-Lopez, F.

    1983-01-01

    The heterogeneous interstitial nucleation model previously proposed to explain F-center growth curves in irradiated alkali halides has been extended to account for M-center kinetics. The interstitials produced during the primary irradiation event are assumed to be trapped at impurities and interstitial clusters or recombine with F and M centers. For M-center formation two cases have been considered: (a) diffusion and aggregation of F centers, and (b) statistical generation and pairing of F centers. Process (b) is the only one consistent with the quadratic relationship between M and F center concentrations. However, to account for the F/M ratios experimentally observed as well as for the role of dose-rate, a modified statistical model involving random creation and association of F⁺-F pairs has been shown to be adequate. (author)
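
    As a reading aid for the quadratic M-F relationship, the toy Euler integration below implements one plausible statistical-pairing rule: F centers are generated at a constant rate, and a newly created F center joins an M center with probability proportional to the existing F concentration. The rate constants are invented and this is not the authors' model; it only shows that such a rule yields M proportional to the square of F.

    g = 1.0    # F-center generation rate (arbitrary units)
    k = 1e-4   # pairing probability per unit F concentration (invented)
    dt = 0.1
    F = M = 0.0

    for _ in range(20_000):
        dM = k * g * F * dt  # a new F pairs with a nearby existing F
        F += g * dt
        M += dM

    # Continuum limit: dM/dF = k*F, hence M = (k/2) * F**2.
    print(M / F**2)  # ~ k/2 = 5e-5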

  9. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITY: Evaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  10. 24th & 25th Joint Workshop on Sustained Simulation Performance

    CERN Document Server

    Bez, Wolfgang; Focht, Erich; Gienger, Michael; Kobayashi, Hiroaki

    2017-01-01

    This book presents the state of the art in High Performance Computing on modern supercomputer architectures. It addresses trends in hardware and software development in general, as well as the future of High Performance Computing systems and heterogeneous architectures. The contributions cover a broad range of topics, from improved system management to Computational Fluid Dynamics, High Performance Data Analytics, and novel mathematical approaches for large-scale systems. In addition, they explore innovative fields like coupled multi-physics and multi-scale simulations. All contributions are based on selected papers presented at the 24th Workshop on Sustained Simulation Performance, held at the University of Stuttgart’s High Performance Computing Center in Stuttgart, Germany in December 2016 and the subsequent Workshop on Sustained Simulation Performance, held at the Cyberscience Center, Tohoku University, Japan in March 2017.

  11. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2002-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  12. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2001-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  13. THE CENTER FOR DATA INTENSIVE COMPUTING

    International Nuclear Information System (INIS)

    GLIMM, J.

    2001-01-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook

  14. THE CENTER FOR DATA INTENSIVE COMPUTING

    Energy Technology Data Exchange (ETDEWEB)

    GLIMM,J.

    2003-11-01

    CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

  15. Development of a computer system at La Hague center

    International Nuclear Information System (INIS)

    Mimaud, Robert; Malet, Georges; Ollivier, Francis; Fabre, J.-C.; Valois, Philippe; Desgranges, Patrick; Anfossi, Gilbert; Gentizon, Michel; Serpollet, Roger.

    1977-01-01

    The U.P.2 plant, built at the La Hague Center, is intended mainly for the reprocessing of spent fuels coming from graphite-gas reactors (as metal) and from light-water, heavy-water and breeder reactors (as oxide). In each of the five large nuclear units the digital processing of measurements was handled until 1974 by CAE 3030 data processors. During the period 1974-1975 a modern industrial computer system was set up. This system, equipped with T 2000/20 equipment from the Telemecanique company, consists of five measurement acquisition devices (for a total of 1500 lines processed) and two central processing units (CPUs). The connection of these two CPUs (hardware and software) enables the system to be switched automatically to either the first CPU or the second one. The system covers, at present, data processing, threshold monitoring, alarm systems, display devices, periodical listing, and specific calculations concerning the process (balances, etc.), and, at a later stage, automatic control of certain units of the process [fr]

  16. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art

  17. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    Energy Technology Data Exchange (ETDEWEB)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art.

  18. A hypothesis on the formation of the primary ossification centers in the membranous neurocranium: a mathematical and computational model.

    Science.gov (United States)

    Garzón-Alvarado, Diego A

    2013-01-21

    This article develops a model of the appearance and location of the primary ossification centers in the calvaria. The model uses a system of reaction-diffusion equations for two molecules (BMP and Noggin) whose behavior is of the activator-substrate type; its solution produces Turing patterns, which represent the primary ossification centers. Additionally, the model includes the level of cell maturation as a function of the location of mesenchymal cells, so that mature cells can become osteoblasts under the action of BMP2. Therefore, with this model, we can obtain two frontal primary centers, two parietal centers, and one, two or more occipital centers. The location of these centers in the simplified computational model is highly consistent with those found at the embryonic level. Copyright © 2012 Elsevier Ltd. All rights reserved.
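
    A minimal finite-difference version of such an activator-substrate system fits in a few lines. The sketch below uses generic Gray-Scott parameters known to produce spot patterns rather than the paper's BMP/Noggin calibration and maturation field, so it only illustrates the Turing mechanism, with the spots playing the role of candidate ossification centers.

    import numpy as np

    n = 128
    Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065  # generic spot-forming regime

    U = np.ones((n, n))     # substrate (the Noggin-like role)
    V = np.zeros((n, n))    # activator (the BMP-like role)
    U[54:74, 54:74] = 0.50  # seed a perturbation in the center
    V[54:74, 54:74] = 0.25

    def lap(Z):
        # 5-point Laplacian with periodic boundaries.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

    for _ in range(10_000):
        UVV = U * V * V
        U += Du * lap(U) - UVV + f * (1.0 - U)
        V += Dv * lap(V) + UVV - (f + k) * V

    # V now holds a spotted Turing-type pattern.
    print(float(V.min()), float(V.max()))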

  19. TAPP - Stuttgart technique and result of a large single center series

    Directory of Open Access Journals (Sweden)

    Bittner R

    2006-01-01

    Full Text Available Laparoscopic hernioplasty is regarded as a difficult operation. Operative technique determines the frequency of complications, the time of recovery and the rate of recurrences. A proper technique is absolutely necessary to achieve results that are superior to open hernia surgery. Technique: The key points of our technique are (1) use of nondisposable instruments; (2) use of blunt trocars, consisting of expanding and non-incisive cone-shaped tips; (3) spacious and curved opening of the peritoneum, high above all possible hernia openings; (4) meticulous dissection of the entire pelvic floor; (5) complete reduction of the hernial sac; (6) wide parietalization of the peritoneal sac, at least down to the middle of the psoas muscle; (7) implantation of a large mesh, at least 10 cm x 15 cm; (8) fixation of the mesh by clips to Cooper's ligament, to the rectus muscle and lateral to the epigastric vessels, high above the ileopubic tract; (9) use of glue, which also allows fixation in the latero-caudal region; and (10) closure of the peritoneum by running suture. Results: With this technique in 12,678 hernia repairs, the following results could be achieved: operating time - 40 min; morbidity - 2.9%; recurrence rate - 0.7%; disability of work - 14 days. In all types of hernias (recurrence after previous open surgery, recurrence after previous preperitoneal operation, scrotal hernia, hernia in patients after transabdominal prostate resection), similar results could be achieved. Summary: Laparoscopic hernia repair can be performed successfully in clinical practice, even by surgeons in training. The precondition for this success is a strictly standardized operative technique and a well-structured educational program.

  20. Computational-physics program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1982-02-01

    The computational physics group is involved in several areas of fusion research. One main area is the application of multidimensional Fokker-Planck, transport and combined Fokker-Planck/transport codes to both toroidal and mirror devices. Another major area is the investigation of linear and nonlinear resistive magnetohydrodynamics in two and three dimensions, with applications to all types of fusion devices. The MHD work is often coupled with the task of numerically generating equilibria which model experimental devices. In addition to these computational physics studies, investigations of more efficient numerical algorithms are being carried out

  1. The Benefits of Making Data from the EPA National Center for Computational Toxicology available for reuse (ACS Fall meeting 3 of 12)

    Science.gov (United States)

    Researchers at EPA’s National Center for Computational Toxicology (NCCT) integrate advances in biology, chemistry, exposure and computer science to help prioritize chemicals for further research based on potential human health risks. The goal of this research is to quickly evalua...

  2. Outline of computer application in PNC

    International Nuclear Information System (INIS)

    Aoki, Minoru

    1990-01-01

    Computer application systems are an important resource for R and D (research and development) in PNC. Various types of computer systems are widely used for the R and D work of experiment, evaluation and analysis, plant operation and other jobs in PNC. Currently, computer centers have been established at the Oarai Engineering Center and at the Tokai Works. The former operates a large-scale digital computer and supercomputer systems; the latter operates only a large-scale digital computer system. These computer systems are joined in the PNC Information Network, which connects the Head Office and the branches (Oarai, Tokai, Ningyotoge and Fugen) by means of a super digital circuit. In the near future, the computer centers will be brought together in order to raise the operating efficiency of the computer systems. The new computer center, called the 'Information Center', is under construction at the Oarai Engineering Center. (author)

  3. Risk factors for computer visual syndrome (CVS) among operators of two call centers in São Paulo, Brazil.

    Science.gov (United States)

    Sa, Eduardo Costa; Ferreira Junior, Mario; Rocha, Lys Esther

    2012-01-01

    The aims of this study were to investigate work conditions, to estimate the prevalence of Computer Vision Syndrome, and to describe the risk factors associated with it among operators of two call centers in São Paulo (n = 476). The methods included a quantitative cross-sectional observational study and an ergonomic work analysis, using work observation, interviews and questionnaires. The case definition was the presence of one or more specific ocular symptoms reported as occurring always, often or sometimes. The multiple logistic regression model was created using the stepwise forward likelihood method, retaining the variables with significance levels below 5% (p < 0.05); among the ocular complaints reported was blurred vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. The associations verified were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7) and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). The organization of work and psychosocial factors at work should be included in prevention programs for visual syndrome among call center operators.
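
    For readers unfamiliar with how such odds ratios arise, the sketch below fits a logistic regression on synthetic data with statsmodels and reads the odds ratios and their 95% confidence intervals off the fitted coefficients. The variables and effect sizes are invented, and the study's stepwise variable selection is not reproduced.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 476  # same size as the study population; the data are synthetic

    female = rng.integers(0, 2, n)
    demand = rng.integers(0, 2, n)

    # Outcome drawn from an invented logistic model.
    p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.9 * female + 0.3 * demand)))
    y = (rng.random(n) < p).astype(float)

    X = sm.add_constant(np.column_stack([female, demand]))
    fit = sm.Logit(y, X).fit(disp=False)

    print(np.exp(fit.params))      # odds ratios: OR = exp(beta)
    print(np.exp(fit.conf_int()))  # 95% CIs on the odds-ratio scale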

  4. The computational physics program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. Another major area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers

  5. [Text mining, a method for computer-assisted analysis of scientific texts, demonstrated by an analysis of author networks].

    Science.gov (United States)

    Hahn, P; Dullweber, F; Unglaub, F; Spies, C K

    2014-06-01

    Searching for relevant publications is becoming more difficult as the number of scientific articles increases. Text mining, a specific form of computer-based data analysis, may be helpful in this context. Highlighting relations between authors and finding relevant publications on a specific subject using text analysis programs are illustrated graphically by two worked examples. © Georg Thieme Verlag KG Stuttgart · New York.
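
    The author-network part of such an analysis boils down to counting co-occurrences of authors across records. The stdlib-only Python sketch below uses invented records (seeded with this article's own author names); a real analysis would parse records from a literature database and feed the weighted edges to a graph-drawing tool.

    from collections import Counter
    from itertools import combinations

    # Invented toy records: one author list per publication.
    records = [
        ["Hahn P", "Dullweber F", "Unglaub F", "Spies C K"],
        ["Hahn P", "Unglaub F"],
        ["Dullweber F", "Spies C K"],
    ]

    edges = Counter()
    for authors in records:
        for pair in combinations(sorted(authors), 2):
            edges[pair] += 1  # edge weight = number of joint papers

    for (a, b), w in edges.most_common():
        print(f"{a} -- {b}: {w} joint paper(s)")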

  6. Threat and vulnerability analysis and conceptual design of countermeasures for a computer center under construction

    International Nuclear Information System (INIS)

    Rozen, A.; Musacchio, J.M.

    1988-01-01

    This project involved the assessment of a new computer center to be used as the main national data processing facility of a large European bank. This building serves as the principal facility in the country, with all other branches utilizing the data processing center. As such, the building is a crucial target which may attract terrorist attacks. Threat and vulnerability assessments were performed as a basis for defining an overall, fully integrated security system of passive and active countermeasures for the facility. After the range of threats and vulnerabilities had been assessed separately, a combined matrix of threats and vulnerabilities was used to identify the crucial combinations. A set of architectural-structural passive measures was added to the active components of the security system

  7. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    Directory of Open Access Journals (Sweden)

    André PAGLIOSA

    2015-01-01

    Full Text Available The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape.

  8. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    International Nuclear Information System (INIS)

    Pagliosa, Andre; Raucci-Neto, Walter; Silva-Souza, Yara Teresinha Correa; Alfredo, Edson; Sousa-Neto, Manoel Damiao; Versiani, Marco Aurelio

    2015-01-01

    The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape. (author)

  9. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    Energy Technology Data Exchange (ETDEWEB)

    Pagliosa, Andre; Raucci-Neto, Walter; Silva-Souza, Yara Teresinha Correa; Alfredo, Edson, E-mail: ysousa@unaerp.br [Universidade de Ribeirao Preto (UNAERP), SP (Brazil). Fac. de Odontologia; Sousa-Neto, Manoel Damiao; Versiani, Marco Aurelio [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Fac. de Odoentologia

    2015-03-01

    The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape. (author)
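
    The transportation and centering measures reported above are typically derived from paired pre- and post-instrumentation CBCT distances (Gambill-style definitions), followed by one-way ANOVA with Tukey post-hoc tests. The Python sketch below illustrates one plausible form of that computation; all values, group means, and helper names are hypothetical assumptions, not the study's data or code.

        import numpy as np
        from scipy import stats

        def transportation(a1, a2, b1, b2):
            # Gambill-style transportation: (a1 - a2) - (b1 - b2), where a/b are
            # the shortest distances from the mesial/distal root edge to the canal
            # edge before (1) and after (2) instrumentation, at one apical level.
            return (a1 - a2) - (b1 - b2)

        def centering_ratio(a1, a2, b1, b2):
            # Ratio of the smaller to the larger amount of dentin removed;
            # 1.0 means a perfectly centered preparation.
            da, db = abs(a1 - a2), abs(b1 - b2)
            return 0.0 if max(da, db) == 0 else min(da, db) / max(da, db)

        # Hypothetical per-system transportation samples (mm), 10 canals each,
        # centered on the means reported in the abstract.
        rng = np.random.default_rng(0)
        hr, lb, pt, tf = (rng.normal(m, 0.06, 10)
                          for m in (-0.049, -0.004, -0.003, -0.021))

        f, p = stats.f_oneway(hr, lb, pt, tf)      # one-way ANOVA across systems
        print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")
        print(stats.tukey_hsd(hr, lb, pt, tf))     # Tukey HSD pairwise comparisons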

  10. Certification of version 1.2 of the PORFLO-3 code for the WHC scientific and engineering computational center

    International Nuclear Information System (INIS)

    Kline, N.W.

    1994-01-01

    Version 1.2 of the PORFLO-3 Code has migrated from the Hanford Cray computer to workstations in the WHC Scientific and Engineering Computational Center. The workstation-based configuration and acceptance testing are inherited from the CRAY-based configuration. The purpose of this report is to document differences in the new configuration as compared to the parent Cray configuration, and summarize some of the acceptance test results which have shown that the migrated code is functioning correctly in the new environment

  11. Visual problems in young adults due to computer use.

    Science.gov (United States)

    Moschos, M M; Chatziralli, I P; Siasou, G; Papazisis, L

    2012-04-01

    Computer use can cause visual problems. The purpose of our study was to evaluate visual problems due to computer use in young adults. Participants in our study were 87 adults, 48 male and 39 female, with a mean age of 31.3 years (SD 7.6). All the participants completed a questionnaire regarding visual problems detected after computer use. The mean daily use of computers was 3.2 hours (SD 2.7). 65.5 % of the participants complained of dry eye, mainly after more than 2.5 hours of computer use. 32 persons (36.8 %) had a foreign body sensation in their eyes, while 15 participants (17.2 %) complained of blurred vision which caused difficulties in driving, after 3.25 hours of continuous computer use. 10.3 % of the participants sought medical advice for their problem. There was a statistically significant correlation between the frequency of visual problems and the duration of computer use (p = 0.021). 79.3 % of the participants use artificial tears during or after long use of computers, so as not to feel any ocular discomfort. The main symptom after computer use in young adults was dry eye. All visual problems were associated with the duration of computer use. Artificial tears play an important role in the treatment of ocular discomfort after computer use. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Walkability is Only Part of the Story: Walking for Transportation in Stuttgart, Germany

    Directory of Open Access Journals (Sweden)

    Maren Reyer

    2014-05-01

    Full Text Available In modern Western societies people often lead inactive and sedentary lifestyles, even though there is no doubt that physical activity and health are related. From an urban planning point of view it would be highly desirable to develop built environments in a way that supports people in leading more active and healthy lifestyles. Within this context there are several methods, predominantly used in the US, to measure the suitability of built environments for walking and cycling. Empirical studies show that people living in highly walkable areas are more physically active (for example, walk more or cycle more). The question is, however, whether these results are also valid for European cities given their different urban planning characteristics and infrastructure standards. To answer this question we used the Walkability-Index and the Walk Score to empirically investigate the associations between walkability and active transportation in the city of Stuttgart, Germany. In a sample of household survey data (n = 1,871) we found a noticeable relationship between walkability and active transportation—the more walkable an area was, the more active residents were. Although the statistical effect is small, the health impact might be of relevance. Being physically active is multi-determined and not only affected by the walkability of an area. We highlight these points with an excursion into research that the health and exercise sciences contribute to the topic. We propose to strengthen interdisciplinary research between the disciplines and to specifically collect data that captures the influence of the environment on physical activity in the future.

  13. Walkability is only part of the story: walking for transportation in Stuttgart, Germany.

    Science.gov (United States)

    Reyer, Maren; Fina, Stefan; Siedentop, Stefan; Schlicht, Wolfgang

    2014-05-30

    In modern Western societies people often lead inactive and sedentary lifestyles, even though there is no doubt that physical activity and health are related. From an urban planning point of view it would be highly desirable to develop built environments in a way that supports people in leading more active and healthy lifestyles. Within this context there are several methods, predominantly used in the US, to measure the suitability of built environments for walking and cycling. Empirical studies show that people living in highly walkable areas are more physically active (for example, walk more or cycle more). The question is, however, whether these results are also valid for European cities given their different urban planning characteristics and infrastructure standards. To answer this question we used the Walkability-Index and the Walk Score to empirically investigate the associations between walkability and active transportation in the city of Stuttgart, Germany. In a sample of household survey data (n = 1,871) we found a noticeable relationship between walkability and active transportation: the more walkable an area was, the more active residents were. Although the statistical effect is small, the health impact might be of relevance. Being physically active is multi-determined and not only affected by the walkability of an area. We highlight these points with an excursion into research that the health and exercise sciences contribute to the topic. We propose to strengthen interdisciplinary research between the disciplines and to specifically collect data that captures the influence of the environment on physical activity in the future.
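
    A minimal sketch of the kind of analysis described above: a walkability index built from z-scored built-environment attributes, correlated with self-reported walking for transport. The attribute names, weights, and data are illustrative assumptions, not the Stuttgart survey variables.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Hypothetical built-environment attributes for 200 areas.
        density      = rng.lognormal(3.0, 0.4, 200)   # residents per hectare
        connectivity = rng.normal(60, 15, 200)        # intersections per km^2
        land_use_mix = rng.uniform(0, 1, 200)         # entropy-style mix score

        def z(x):
            # Standardize an attribute to zero mean and unit variance.
            return (x - x.mean()) / x.std()

        # A common walkability-index form: a weighted sum of z-scores.
        walkability = 2 * z(connectivity) + z(density) + z(land_use_mix)

        # Outcome: weekly minutes walked for transport (simulated, correlated).
        walk_minutes = 90 + 8 * walkability + rng.normal(0, 40, 200)

        rho, p = stats.spearmanr(walkability, walk_minutes)
        print(f"Spearman rho = {rho:.2f}, p = {p:.2e}")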

  14. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany – a forum to discuss the latest advances in parallel tools.

  15. Hock, Beáta. 2013. Gendered Artistic Positions and Social Voices - Politics, Cinema and the Visual Arts in State-Socialist and Post-Socialist Hungary. Stuttgart: Franz Steiner Verlag. 284 pp. illus.

    Directory of Open Access Journals (Sweden)

    Lilla Tőke

    2016-01-01

    Full Text Available Hock, Beáta. 2013. Gendered Artistic Positions and Social Voices - Politics, Cinema and the Visual Arts in State-Socialist and Post-Socialist Hungary. Stuttgart: Franz Steiner Verlag. 284 pp. illus. Reviewed by Lilla Tőke, Assistant Professor, City University of New York, LaGuardia Community College

  16. Spectrum of tablet computer use by medical students and residents at an academic medical center

    Directory of Open Access Journals (Sweden)

    Robert Robinson

    2015-07-01

    Full Text Available Introduction. The value of tablet computer use in medical education is an area of considerable interest, with preliminary investigations showing that the majority of medical trainees feel that tablet computers added value to the curriculum. This study investigated potential differences in tablet computer use between medical students and resident physicians. Materials & Methods. Data collection for this survey was accomplished with an anonymous online questionnaire shared with the medical students and residents at Southern Illinois University School of Medicine (SIU-SOM) in July and August of 2012. Results. There were 76 medical student responses (26% response rate) and 66 resident/fellow responses to this survey (21% response rate). Residents/fellows were more likely to use tablet computers several times daily than medical students (32% vs. 20%, p = 0.035). The most common reported uses were for accessing medical reference applications (46%), e-Books (45%), and board study (32%). Residents were more likely than students to use a tablet computer to access an electronic medical record (41% vs. 21%, p = 0.010), review radiology images (27% vs. 12%, p = 0.019), and enter patient care orders (26% vs. 3%, p < 0.001). Discussion. This study shows a high prevalence and frequency of tablet computer use among physicians in training at this academic medical center. Most residents and students use tablet computers to access medical references, e-Books, and to study for board exams. Residents were more likely to use tablet computers to complete clinical tasks. Conclusions. Tablet computer use among medical students and resident physicians was common in this survey. All learners used tablet computers for point-of-care references and board study. Resident physicians were more likely to use tablet computers to access the EMR, enter patient care orders, and review radiology studies. This difference is likely due to the differing educational and professional demands placed on
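
    The resident-versus-student comparisons above are simple two-proportion tests; one plausible way to reproduce such a p-value is a chi-square test on a 2x2 table, sketched below. The counts are reconstructed from the reported percentages for illustration only, not taken from the study's raw data.

        from scipy.stats import chi2_contingency

        # 2x2 table for one item: rows = residents/fellows, medical students;
        # columns = uses a tablet for EMR access (yes, no). Counts approximate
        # the reported 41% of 66 vs. 21% of 76 and are illustrative only.
        table = [[27, 39],
                 [16, 60]]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")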

  17. Pain, Work-related Characteristics, and Psychosocial Factors among Computer Workers at a University Center.

    Science.gov (United States)

    Mainenti, Míriam Raquel Meira; Felicio, Lilian Ramiro; Rodrigues, Erika de Carvalho; Ribeiro da Silva, Dalila Terrinha; Vigário Dos Santos, Patrícia

    2014-04-01

    [Purpose] Complaint of pain is common in computer workers, encouraging the investigation of pain-related workplace factors. This study investigated the relationship among work-related characteristics, psychosocial factors, and pain among computer workers from a university center. [Subjects and Methods] Fifteen subjects (median age, 32.0 years; interquartile range, 26.8-34.5 years) were subjected to measurement of bioelectrical impedance; photogrammetry; workplace measurements; and pain complaint, quality of life, and motivation questionnaires. [Results] The low back was the most prevalent region of complaint (76.9%). The number of body regions for which subjects complained of pain was greater in the no-rest-breaks group, which also presented higher prevalences of neck (62.5%) and low back (100%) pain. Associations were also observed between neck complaint and quality of life; neck complaint and head protrusion; wrist complaint and shoulder angle; and use of a chair back and thoracic pain. [Conclusion] Complaint of pain was associated with no short rest breaks, no use of a chair back, poor quality of life, high head protrusion, and shoulder angle while using the mouse of a computer.

  18. Lecture 4: Cloud Computing in Large Computer Centers

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    This lecture will introduce Cloud Computing concepts, identifying and analyzing its characteristics, models, and applications. Also, you will learn how CERN built its Cloud infrastructure and which tools are being used to deploy and manage it. About the speaker: Belmiro Moreira is an enthusiastic software engineer passionate about the challenges and complexities of architecting and deploying Cloud Infrastructures in ve...

  19. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    Science.gov (United States)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS desires to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the Langley Science Directorate needs to be evaluated by integrating it with real world operational needs across NASA and the associated maturity that would come with that. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications were demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Sciences Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project level perspective by specifically using a processing scenario involving the Clouds and Earth's Radiant Energy System (CERES) project.

  20. Changing the batch system in a Tier 1 computing center: why and how

    Science.gov (United States)

    Chierici, Andrea; Dal Pra, Stefano

    2014-06-01

    At the Italian Tier1 Center at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the search for a more flexible licensing model and the desire to avoid vendor lock-in. We performed a technology-tracking exercise and, among many possible solutions, we chose to evaluate Grid Engine as an alternative because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm, and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand whether it can fit the requirements of a Tier 1 center, compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: much production software (accounting and monitoring above all) relies on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centers in a similar situation, where the effort of switching may appear too great to undertake. We will provide guidelines in order to understand how difficult this operation can be and how long the change may take.

  1. Human factors in computing systems: focus on patient-centered health communication at the ACM SIGCHI conference.

    Science.gov (United States)

    Wilcox, Lauren; Patel, Rupa; Chen, Yunan; Shachak, Aviv

    2013-12-01

    Health Information Technologies, such as electronic health records (EHR) and secure messaging, have already transformed interactions among patients and clinicians. In addition, technologies supporting asynchronous communication outside of clinical encounters, such as email, SMS, and patient portals, are being increasingly used for follow-up, education, and data reporting. Meanwhile, patients are increasingly adopting personal tools to track various aspects of health status and therapeutic progress, wishing to review these data with clinicians during consultations. These issues have drawn increasing interest from the human-computer interaction (HCI) community, with special focus on critical challenges in patient-centered interactions and design opportunities that can address these challenges. We saw this community presenting and interacting at ACM SIGCHI 2013, the Conference on Human Factors in Computing Systems (also known as CHI), held April 27 to May 2, 2013 at the Palais des Congrès de Paris in France. CHI 2013 featured many formal avenues to pursue patient-centered health communication: a well-attended workshop, tracks of original research, and a lively panel discussion. In this report, we highlight these events and the main themes we identified. We hope that it will help bring the health care communication and HCI communities closer together. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. The Internet and Computer User Profile: a questionnaire for determining intervention targets in occupational therapy at mental health vocational centers.

    Science.gov (United States)

    Regev, Sivan; Hadas-Lidor, Noami; Rosenberg, Limor

    2016-08-01

    In this study, the assessment tool "Internet and Computer User Profile" questionnaire (ICUP) is presented and validated. It was developed in order to gather information for setting intervention goals to meet current demands. Sixty-eight subjects aged 23-68 participated in the study. The study group (n = 28) was sampled from two vocational centers. The control group consisted of 40 participants from the general population that were sampled by convenience sampling based on the demographics of the study group. Subjects from both groups answered the ICUP questionnaire. Subjects of the study group answered the General Self-Efficacy (GSE) questionnaire and performed the Assessment of Computer Task Performance (ACTP) test in order to examine the convergent validity of the ICUP. Twenty subjects from both groups retook the ICUP questionnaire in order to obtain test-retest results. Differences between groups were tested using multivariate analysis of variance (MANOVA) tests. Pearson and Spearman's tests were used for calculating correlations. Cronbach's alpha coefficient and k equivalent were used to assess internal consistency. The results indicate that the questionnaire is valid and reliable. They emphasize that the layout of the ICUP items facilitates making a comprehensive examination of the client's perception regarding his participation in computer and internet activities. Implications for Rehabilitation: The assessment tool "Internet and Computer User Profile" (ICUP) questionnaire is a novel assessment tool that evaluates operative use and individual perception of computer activities. The questionnaire is valid and reliable for use with participants of vocational centers dealing with mental illness. It is essential to facilitate access to computers for people with mental illnesses, seeing that they express similar interest in computers and internet as people from the general population of the same age. Early intervention will be particularly effective for young
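
    Internal consistency of a questionnaire such as the ICUP is commonly assessed with Cronbach's alpha, as mentioned above. A minimal sketch of that computation follows; the response matrix is simulated and purely illustrative, not the study's data.

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_subjects, n_items) matrix of item scores.
            # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Simulated Likert responses: 20 subjects x 5 items, scores 1-5.
        rng = np.random.default_rng(2)
        base = rng.integers(1, 6, size=(20, 1))
        responses = np.clip(base + rng.integers(-1, 2, size=(20, 5)), 1, 5)
        print(f"alpha = {cronbach_alpha(responses):.2f}")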

  3. Data Center Consolidation: A Step towards Infrastructure Clouds

    Science.gov (United States)

    Winter, Markus

    Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.

  4. 10th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Hilbrich, Tobias; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2017-01-01

    This book presents the proceedings of the 10th International Parallel Tools Workshop, held October 4-5, 2016 in Stuttgart, Germany – a forum to discuss the latest advances in parallel tools. High-performance computing plays an increasingly important role for numerical simulation and modelling in academic and industrial research. At the same time, using large-scale parallel systems efficiently is becoming more difficult. A number of tools addressing parallel program development and analysis have emerged from the high-performance computing community over the last decade, and what may have started as a collection of small helper scripts has now matured into production-grade frameworks. Powerful user interfaces and an extensive body of documentation allow easy usage by non-specialists.

  5. The Soviet center of astronomical data

    International Nuclear Information System (INIS)

    Dluzhnevskaya, O.B.

    1982-01-01

    On the basis of the current French-Soviet cooperation in science and technology, the Astronomical Council of the U.S.S.R. Academy of Sciences and the Strasbourg Center signed in 1977 an agreement on setting up the Soviet Center of Astronomical Data as its filial branch. The Soviet Center was created on the basis of a computation center at the Zvenigorod station of the Astronomical Council of the U.S.S.R. Academy of Sciences, which had already had considerable experience of working with stellar catalogues. In 1979 the Center was equipped with an EC-1033 computer. In 1978-1979 the Soviet Center of Astronomical Data (C.A.D.) received from Strasbourg 96 of the most important catalogues. By September 1981 the list of catalogues available at the Soviet Center had reached 140, some of which are described. (Auth.)

  6. Solving algebraic computational problems in geodesy and geoinformatics the answer to modern challenges

    CERN Document Server

    Awange, Joseph L

    2004-01-01

    While preparing and teaching 'Introduction to Geodesy I and II' to undergraduate students at Stuttgart University, we noticed a gap which motivated the writing of the present book: Almost every topic that we taught required some skills in algebra, and in particular, computer algebra! From positioning to transformation problems inherent in geodesy and geoinformatics, knowledge of algebra and application of computer algebra software were required. In preparing this book therefore, we have attempted to put together basic concepts of abstract algebra which underpin the techniques for solving algebraic problems. Algebraic computational algorithms useful for solving problems which require exact solutions to nonlinear systems of equations are presented and tested on various problems. Though the present book focuses mainly on the two fields, the concepts and techniques presented herein are nonetheless applicable to other fields where algebraic computational problems might be encountered. In Engineering for example, network densification and robo...
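
    As a small illustration of the exact, computer-algebra style of solution the book advocates, the following sketch solves a toy nonlinear system symbolically with SymPy; the equations are illustrative only, not a worked geodetic problem from the book.

        from sympy import symbols, solve

        x, y = symbols('x y', real=True)

        # Exact (symbolic) solution of a small nonlinear system, in the spirit
        # of closed-form algebraic methods for intersection/resection problems.
        solutions = solve([x**2 + y**2 - 1, x + y - 1], [x, y], dict=True)
        for s in solutions:
            print(s)   # {x: 0, y: 1} and {x: 1, y: 0}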

  7. Computing at Stanford.

    Science.gov (United States)

    Feigenbaum, Edward A.; Nielsen, Norman R.

    1969-01-01

    This article provides a current status report on the computing and computer science activities at Stanford University, focusing on the Computer Science Department, the Stanford Computation Center, the recently established regional computing network, and the Institute for Mathematical Studies in the Social Sciences. Also considered are such topics…

  8. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    Science.gov (United States)

    Molthan, A.; Limaye, A. S.

    2011-12-01

    Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by

  9. Initial Flight Test of the Production Support Flight Control Computers at NASA Dryden Flight Research Center

    Science.gov (United States)

    Carter, John; Stephenson, Mark

    1999-01-01

    The NASA Dryden Flight Research Center has completed the initial flight test of a modified set of F/A-18 flight control computers that gives the aircraft a research control law capability. The production support flight control computers (PSFCC) provide an increased capability for flight research in the control law, handling qualities, and flight systems areas. The PSFCC feature a research flight control processor that is "piggybacked" onto the baseline F/A-18 flight control system. This research processor allows for pilot selection of research control law operation in flight. To validate flight operation, a replication of a standard F/A-18 control law was programmed into the research processor and flight-tested over a limited envelope. This paper provides a brief description of the system, summarizes the initial flight test of the PSFCC, and describes future experiments for the PSFCC.

  10. Annual report of R and D activities in center for promotion of computational science and engineering from April 1, 2003 to March 31, 2004

    International Nuclear Information System (INIS)

    2005-08-01

    Major research and development activities of the Center for Promotion of Computational Science and Engineering (CCSE), JAERI, have focused on the ITBL (IT Based Laboratory) project, computational material science, and quantum bioinformatics. This report provides an overview of research and development activities in CCSE in the fiscal year 2003 (April 1, 2003 - March 31, 2004). (author)

  11. Environmental Modeling Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...

  12. [Academic procrastination in clients of a psychotherapeutic student counselling center].

    Science.gov (United States)

    Jamrozinski, Katja; Kuda, Manfred; Mangholz, Astrid

    2009-01-01

    The start of university education is the beginning of a new phase of life for young adults, one which requires significant psychosocial adjustments. Sociobiographical data, clinical symptoms, characteristics of education, work attitude, and career perspectives were gathered from 152 clients of a psychotherapeutic student counselling center to evaluate characteristics of students with and without academic procrastination. The procrastination group comprised a heightened number of students who had changed universities, and of people with suboptimal career prospects and career targets. These subjects were more often male and showed increased incidences of drug and alcohol problems, as well as a lack of planning for the future. Furthermore, they financed a larger share of their studies themselves. On the basis of these results, concrete recommendations are presented for preventive measures to improve on-time completion of study and to prevent student drop-out. Georg Thieme Verlag KG Stuttgart-New York.

  13. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    Science.gov (United States)

    Molthan, Andrew L.; Limaye, Ashutosh S.; Srikishen, Jayanthi

    2011-01-01

    Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by

  14. Energy efficient thermal management of data centers

    CERN Document Server

    Kumar, Pramod

    2012-01-01

    Energy Efficient Thermal Management of Data Centers examines energy flow in today's data centers. Particular focus is given to the state-of-the-art thermal management and thermal design approaches now being implemented across the multiple length scales involved. The impact of future trends in information technology hardware, and emerging software paradigms such as cloud computing and virtualization, on thermal management are also addressed. The book explores computational and experimental characterization approaches for determining temperature and air flow patterns within data centers. Thermodynamic analyses using the second law to improve energy efficiency are introduced and used in proposing improvements in cooling methodologies. Reduced-order modeling and robust multi-objective design of next generation data centers are discussed. This book also: Provides in-depth treatment of energy efficiency ideas based on  fundamental heat transfer, fluid mechanics, thermodynamics, controls, and computer science Focus...

  15. The Development of University Computing in Sweden 1965-1985

    Science.gov (United States)

    Dahlstrand, Ingemar

    In 1965-70 the government agency, Statskontoret, set up five university computing centers, as service bureaux financed by grants earmarked for computer use. The centers were well equipped and staffed and caused a surge in computer use. When the yearly flow of grant money stagnated at 25 million Swedish crowns, the centers had to find external income to survive and acquire time-sharing. But the charging system led to the computers not being fully used. The computer scientists lacked equipment for laboratory use. The centers were decentralized and the earmarking abolished. Eventually they got new tasks like running computers owned by the departments, and serving the university administration.

  16. Concurrent validity of an automated algorithm for computing the center of pressure excursion index (CPEI).

    Science.gov (United States)

    Diaz, Michelle A; Gibbons, Mandi W; Song, Jinsup; Hillstrom, Howard J; Choe, Kersti H; Pasquale, Maria R

    2018-01-01

    Center of Pressure Excursion Index (CPEI), a parameter computed from the distribution of plantar pressures during the stance phase of barefoot walking, has been used to assess dynamic foot function. The original custom program developed to calculate CPEI required the oversight of a user who could manually correct for certain exceptions to the computational rules. A new fully automatic program has been developed to calculate CPEI with an algorithm that accounts for these exceptions. The purpose of this paper is to compare the resulting CPEI values computed by these two programs on plantar pressure data from both asymptomatic and pathologic subjects. If comparable, the new program offers significant benefits: reduced potential for variability due to rater discretion and faster CPEI calculation. CPEI values were calculated from barefoot plantar pressure distributions during comfortably paced walking in 61 healthy asymptomatic adults, 19 diabetic adults with moderate hallux valgus, and 13 adults with mild hallux valgus. Right foot data for each subject were analyzed with linear regression and a Bland-Altman plot. The automated algorithm yielded CPEI values that were linearly related to those of the original program (R² = 0.99; P < 0.001), indicating close agreement between the two computation methods. Results of this analysis suggest that the new automated algorithm may be used to calculate CPEI on both healthy and pathologic feet. Copyright © 2017 Elsevier B.V. All rights reserved.
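
    Agreement between two measurement programs, as analyzed above, is commonly checked with linear regression plus Bland-Altman statistics. A minimal sketch follows; the paired CPEI values are simulated under assumed means and noise levels, not the study's measurements.

        import numpy as np
        from scipy import stats

        # Simulated paired CPEI values from the original (semi-manual) and the
        # automated program for 93 feet (61 + 19 + 13, as in the abstract).
        rng = np.random.default_rng(3)
        cpei_original = rng.normal(0.21, 0.06, 93)
        cpei_auto = cpei_original + rng.normal(0.0, 0.005, 93)

        # Linear agreement between the two methods.
        fit = stats.linregress(cpei_original, cpei_auto)
        print(f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.2e}")

        # Bland-Altman statistics: bias and 95% limits of agreement.
        diff = cpei_auto - cpei_original
        bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
        print(f"bias = {bias:+.4f}, LoA = [{bias - loa:.4f}, {bias + loa:.4f}]")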

  17. Computer technology and computer programming research and strategies

    CERN Document Server

    Antonakos, James L

    2011-01-01

    Covering a broad range of new topics in computer technology and programming, this volume discusses encryption techniques, SQL generation, Web 2.0 technologies, and visual sensor networks. It also examines reconfigurable computing, video streaming, animation techniques, and more. Readers will learn about an educational tool and game to help students learn computer programming. The book also explores a new medical technology paradigm centered on wireless technology and cloud computing designed to overcome the problems of increasing health technology costs.

  18. Senior Computational Scientist | Center for Cancer Research

    Science.gov (United States)

    The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP),

  19. Software package as an information center product

    International Nuclear Information System (INIS)

    Butler, M.K.

    1977-01-01

    The Argonne Code Center serves as a software exchange and information center for the U.S. Energy Research and Development Administration and the Nuclear Regulatory Commission. The goal of the Center's program is to provide a means for sharing of software among agency offices and contractors, and for transferring computing applications and technology, developed within the agencies, to the information-processing community. A major activity of the Code Center is the acquisition, review, testing, and maintenance of a collection of software--computer systems, applications programs, subroutines, modules, and data compilations--prepared by agency offices and contractors to meet programmatic needs. A brief review of the history of computer program libraries and software sharing is presented to place the Code Center activity in perspective. The state-of-the-art discussion starts off with an appropriate definition of the term software package, together with descriptions of recommended package contents and the Center's package evaluation activity. An effort is made to identify the various users of the product, to enumerate their individual needs, to document the Center's efforts to meet these needs and the ongoing interaction with the user community. Desirable staff qualifications are considered, and packaging problems, reviewed. The paper closes with a brief look at recent developments and a forecast of things to come. 2 tables

  20. Rethinking Human-Centered Computing: Finding the Customer and Negotiated Interactions at the Airport

    Science.gov (United States)

    Wales, Roxana; O'Neill, John; Mirmalek, Zara

    2003-01-01

    The breakdown in the air transportation system over the past several years raises an interesting question for researchers: How can we help improve the reliability of airline operations? In offering some answers to this question, we make a statement about Human-Centered Computing (HCC). First we offer the definition that HCC is a multi-disciplinary research and design methodology focused on supporting humans as they use technology by including cognitive and social systems, computational tools and the physical environment in the analysis of organizational systems. We suggest that a key element in understanding organizational systems is that there are external cognitive and social systems (customers) as well as internal cognitive and social systems (employees) and that they interact dynamically to impact the organization and its work. The design of human-centered intelligent systems must take this outside-inside dynamic into account. In the past, the design of intelligent systems has focused on supporting the work and improvisation requirements of employees but has often assumed that customer requirements are implicitly satisfied by employee requirements. Taking a customer-centric perspective provides a different lens for understanding this outside-inside dynamic, the work of the organization and the requirements of both customers and employees. In this article we will: 1) Demonstrate how the use of ethnographic methods revealed the important outside-inside dynamic in an airline, specifically the consequential relationship between external customer requirements and perspectives and internal organizational processes and perspectives as they came together in a changing environment; 2) Describe how taking a customer-centric perspective identifies places where the impact of the outside-inside dynamic is most critical and requires technology that can be adaptive; 3) Define and discuss the place of negotiated interactions in airline operations, identifying how these

  1. Effort-reward imbalance and one-year change in neck-shoulder and upper extremity pain among call center computer operators.

    Science.gov (United States)

    Krause, Niklas; Burgel, Barbara; Rempel, David

    2010-01-01

    The literature on psychosocial job factors and musculoskeletal pain is inconclusive, in part due to insufficient control for confounding by biomechanical factors. The aim of this study was to investigate prospectively the independent effects of effort-reward imbalance (ERI) at work on regional musculoskeletal pain of the neck and upper extremities of call center operators after controlling for (i) duration of computer use both at work and at home, (ii) ergonomic workstation design, (iii) physical activities during leisure time, and (iv) other individual worker characteristics. This was a one-year prospective study among 165 call center operators who participated in a randomized ergonomic intervention trial that has been described previously. Over an approximate four-week period, we measured ERI and 28 potential confounders via a questionnaire at baseline. Regional upper-body pain and computer use were measured by weekly surveys for up to 12 months following the implementation of ergonomic interventions. Regional pain change scores were calculated as the difference between average weekly pain scores pre- and post-intervention. A significant relationship was found between high average ERI ratios and one-year increases in right upper-extremity pain after adjustment for pre-intervention regional mean pain score, current and past physical workload, ergonomic workstation design, and anthropometric, sociodemographic, and behavioral risk factors. No significant associations were found with change in neck-shoulder or left upper-extremity pain. This study suggests that ERI predicts regional upper-extremity pain in computer operators working ≥20 hours per week. Control for physical workload and ergonomic workstation design was essential for identifying ERI as a risk factor.
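
    The exposure in this study, the effort-reward imbalance ratio, is conventionally computed as e/(r·c) with an item-count correction factor c (Siegrist's formulation). A hedged sketch follows; the item counts are assumed from common ERI questionnaire versions rather than taken from this abstract.

        def eri_ratio(effort, reward, n_effort_items=6, n_reward_items=11):
            # Siegrist-style effort-reward imbalance: ERI = e / (r * c), where
            # c = n_effort_items / n_reward_items corrects for unequal numbers
            # of scale items. Item counts are assumptions, not from this study.
            c = n_effort_items / n_reward_items
            return effort / (reward * c)

        # ERI > 1 indicates high effort relative to reward.
        print(round(eri_ratio(effort=18, reward=44), 2))  # 0.75, balanced
        print(round(eri_ratio(effort=22, reward=28), 2))  # 1.44, imbalanced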

  2. Magnetic-fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  3. Magnetic fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  4. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables

  5. Annual report of R and D activities in Center for Computational Science and e-Systems from April 1, 2006 to March 31, 2007

    International Nuclear Information System (INIS)

    2008-03-01

    This report provides an overview of the research and development activities of the Center for Computational Science and e-Systems (CCSE), JAEA in fiscal year 2006 (April 1, 2006 - March 31, 2007). These research and development activities have been performed by the Simulation Technology Research and Development Office and the Computer Science Research and Development Office. The primary results of the research and development activities are the development of simulation techniques for a virtual earthquake testbed, an intelligent infrastructure for atomic energy research, computational biology methods to predict the DNA repair function of proteins, and material models for a neutron detection device, crack propagation, and gas bubble formation in nuclear fuel. (author)

  6. Fluid dynamics parallel computer development at NASA Langley Research Center

    Science.gov (United States)

    Townsend, James C.; Zang, Thomas A.; Dwoyer, Douglas L.

    1987-01-01

    To accomplish more detailed simulations of highly complex flows, such as the transition to turbulence, fluid dynamics research requires computers much more powerful than any available today. Only parallel processing on multiple-processor computers offers hope for achieving the required effective speeds. Looking ahead to the use of these machines, the fluid dynamicist faces three issues: algorithm development for near-term parallel computers, architecture development for future computer power increases, and assessment of possible advantages of special purpose designs. Two projects at NASA Langley address these issues. Software development and algorithm exploration is being done on the FLEX/32 Parallel Processing Research Computer. New architecture features are being explored in the special purpose hardware design of the Navier-Stokes Computer. These projects are complementary and are producing promising results.

  7. Benefits Analysis of Multi-Center Dynamic Weather Routes

    Science.gov (United States)

    Sheth, Kapil; McNally, David; Morando, Alexander; Clymer, Alexis; Lock, Jennifer; Petersen, Julien

    2014-01-01

    Dynamic weather routes are flight plan corrections that can provide airborne flights more than a user-specified number of minutes of flying-time savings, compared to their current flight plan. These routes are computed from the aircraft's current location to a flight plan fix downstream (within a predefined limit region), while avoiding forecasted convective weather regions. The Dynamic Weather Routes automation has been continuously running with live air traffic data for a field evaluation at the American Airlines Integrated Operations Center in Fort Worth, TX since July 31, 2012, where flights within the Fort Worth Air Route Traffic Control Center are evaluated for time savings. This paper extends the methodology to all Centers in the United States and presents a benefits analysis of Dynamic Weather Routes automation, if it were implemented in multiple airspace Centers individually and concurrently. The current computation of dynamic weather routes requires a limit rectangle so that a downstream capture fix can be selected, preventing very large route changes spanning several Centers. In this paper, first, a method of computing a limit polygon (as opposed to a rectangle used for Fort Worth Center) is described for each of the 20 Centers in the National Airspace System. The Future ATM Concepts Evaluation Tool, a nationwide simulation and analysis tool, is used for this purpose. After a comparison of results with the Center-based Dynamic Weather Routes automation in Fort Worth Center, results are presented for 11 Centers in the contiguous United States. These Centers are generally most impacted by convective weather. A breakdown of individual Center and airline savings is presented, and the results indicate that an overall average savings of about 10 minutes of flying time is obtained per flight.
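
    A toy planar sketch of the core dynamic-weather-route test described above: accept a shortcut to a downstream capture fix only if it avoids forecast weather and stays inside the limit polygon, then report the flying-time savings. The geometry, the speed, the savings threshold, and the use of the shapely library are illustrative assumptions, not the automation's implementation.

        from shapely.geometry import LineString, Polygon

        # Forecast weather region and the Center's limit polygon (toy shapes).
        weather = Polygon([(3, 1), (5, 1), (5, 4), (3, 4)])
        limit_region = Polygon([(0, -2), (10, -2), (10, 6), (0, 6)])

        current_route = LineString([(0, 0), (2, 3), (6, 5), (9, 0)])  # filed plan
        candidate = LineString([(0, 0), (9, 0)])  # direct to the capture fix

        def flying_time_min(route, ground_speed=0.5):
            # Ground speed in grid units per minute (illustrative value).
            return route.length / ground_speed

        if not candidate.intersects(weather) and limit_region.contains(candidate):
            savings = flying_time_min(current_route) - flying_time_min(candidate)
            if savings > 5:  # offer only above a user-specified threshold
                print(f"advisory: dynamic route saves {savings:.1f} minutes")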

  8. Relative Lyapunov Center Bifurcations

    DEFF Research Database (Denmark)

    Wulff, Claudia; Schilder, Frank

    2014-01-01

    Relative equilibria (REs) and relative periodic orbits (RPOs) are ubiquitous in symmetric Hamiltonian systems and occur, for example, in celestial mechanics, molecular dynamics, and rigid body motion. REs are equilibria, and RPOs are periodic orbits of the symmetry reduced system. Relative Lyapunov center bifurcations are bifurcations of RPOs from REs corresponding to Lyapunov center bifurcations of the symmetry reduced dynamics. In this paper we first prove a relative Lyapunov center theorem by combining recent results on the persistence of RPOs in Hamiltonian systems with a symmetric Lyapunov center theorem of Montaldi, Roberts, and Stewart. We then develop numerical methods for the detection of relative Lyapunov center bifurcations along branches of RPOs and for their computation. We apply our methods to Lagrangian REs of the N-body problem.
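
    For orientation, the classical (non-relative) Lyapunov center theorem that the relative version generalizes can be stated informally as follows; this is a textbook statement added for reference, not text from the paper.

        % Lyapunov center theorem (informal): for an equilibrium x_0 of a
        % Hamiltonian system \dot{x} = J \nabla H(x):
        \pm i\omega \in \sigma\bigl(J\,D^{2}H(x_{0})\bigr)\ \text{simple},\qquad
        ik\omega \notin \sigma\bigl(J\,D^{2}H(x_{0})\bigr)\ \text{for}\ k\in\mathbb{Z}\setminus\{\pm 1\}
        \;\Longrightarrow\;
        \text{a one-parameter family of periodic orbits emanates from } x_{0},
        \ \text{with periods}\ T\to 2\pi/\omega .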

  9. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, test and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  10. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Kostadin, Damevski [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  11. Production Support Flight Control Computers: Research Capability for F/A-18 Aircraft at Dryden Flight Research Center

    Science.gov (United States)

    Carter, John F.

    1997-01-01

    NASA Dryden Flight Research Center (DFRC) is working with the United States Navy to complete ground testing and initiate flight testing of a modified set of F/A-18 flight control computers. The Production Support Flight Control Computers (PSFCC) can give any fleet F/A-18 airplane an in-flight, pilot-selectable research control law capability. NASA DFRC can efficiently flight test the PSFCC for the following four reasons: (1) Six F/A-18 chase aircraft are available which could be used with the PSFCC; (2) An F/A-18 processor-in-the-loop simulation exists for validation testing; (3) The expertise has been developed in programming the research processor in the PSFCC; and (4) A well-defined process has been established for clearing flight control research projects for flight. This report presents a functional description of the PSFCC. Descriptions of the NASA DFRC facilities, PSFCC verification and validation process, and planned PSFCC projects are also provided.

  12. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  13. An Audit on the Appropriateness of Coronary Computed Tomography Angiography Referrals in a Tertiary Cardiac Center.

    Science.gov (United States)

    Alderazi, Ahmed Ali; Lynch, Mary

    2017-01-01

    In response to growing concerns regarding the overuse of coronary computed tomography angiography (CCTA) in the clinical setting, multiple societies, including the American College of Cardiology Foundation, have jointly published revised criteria regarding the appropriate use of this imaging modality. However, previous research indicates significant discrepancies in the rate of adherence to these guidelines. The aim of this study was to assess the appropriateness of CCTA referrals in a tertiary cardiac center in Bahrain. This retrospective clinical audit examined the records of patients referred for CCTA between April 1, 2015 and December 31, 2015 at the Mohammed bin Khalifa Cardiac Center. Using information from medical records, each case was meticulously audited against the guidelines to categorize it as appropriate, inappropriate, or uncertain. Of the 234 records examined, 176 (75.2%) were appropriate, 47 (20.1%) were uncertain, and 11 (4.7%) were inappropriate. Overall, 74.4% of referrals were to investigate coronary artery disease (CAD). The most common indication deemed appropriate was the detection of CAD in the setting of a suspected ischemic equivalent in patients with an intermediate pretest probability of CAD (65.9%). Most referrals deemed inappropriate were requested to detect CAD in asymptomatic patients at low or intermediate risk of CAD (63.6%). This audit demonstrates a relatively low rate of inappropriate CCTA referrals, indicating the appropriate and efficient use of this resource at the Mohammed bin Khalifa Cardiac Center. Agreement on and reclassification of "uncertain" cases by guideline authorities would facilitate a deeper understanding of referral appropriateness.

  14. Cone-beam Computed Tomographic Assessment of Canal Centering Ability and Transportation after Preparation with Twisted File and Bio RaCe Instrumentation.

    Directory of Open Access Journals (Sweden)

    Kiamars Honardar

    2014-08-01

    Use of rotary nickel-titanium (NiTi) instruments for endodontic preparation has introduced a new era in endodontic practice, but these instruments have undergone dramatic modifications in order to achieve improved shaping ability. Cone-beam computed tomography (CBCT) makes it possible to accurately evaluate geometrical changes following canal preparation. This study was carried out to compare the canal centering ability and transportation of the Twisted File and BioRaCe rotary systems by means of cone-beam computed tomography. Thirty root canals from freshly extracted mandibular and maxillary teeth were selected. Teeth were mounted and scanned before and after preparation by CBCT at different apical levels. Specimens were divided into two groups of 15. In the first group Twisted File, and in the second BioRaCe, was used for canal preparation. Canal transportation and centering ability after preparation were assessed with NNT Viewer and Photoshop CS4 software. Statistical analysis was performed using the t-test and two-way ANOVA. All samples showed deviations from the original axes of the canals. No significant differences were detected between the two rotary NiTi instruments for canal centering ability in any section. Regarding canal transportation, however, a significant difference was seen in the BioRaCe group at 7.5 mm from the apex. Under the conditions of this in vitro study, Twisted File and BioRaCe rotary NiTi files retained the original canal geometry.
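
    The abstract does not spell out its measurement formulas; studies of this kind usually follow the definitions of Gambill et al., computed from dentin thickness on superimposed pre- and post-instrumentation CBCT slices. They are restated here as a reference (an assumption about this study, not a quotation from it):

        % m1, m2 = mesial dentin thickness before/after preparation,
        % d1, d2 = distal dentin thickness before/after preparation.
        \text{transportation} = (m_1 - m_2) - (d_1 - d_2), \qquad
        \text{centering ratio} = \frac{m_1 - m_2}{d_1 - d_2}

    A transportation of 0 and a centering ratio of 1 indicate a perfectly centered preparation; by the usual convention the smaller of the two differences is placed in the numerator, so the ratio lies between 0 and 1.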

  15. International Conference of Intelligence Computation and Evolutionary Computation ICEC 2012

    CERN Document Server

    Intelligence Computation and Evolutionary Computation

    2013-01-01

    The 2012 International Conference of Intelligence Computation and Evolutionary Computation (ICEC 2012) was held on July 7, 2012 in Wuhan, China. The conference was sponsored by the Information Technology & Industrial Engineering Research Center. ICEC 2012 was a forum for the presentation of new research results in intelligent computation and evolutionary computation. Cross-fertilization of intelligent computation, evolutionary computation, evolvable hardware and newly emerging technologies was strongly encouraged. The forum aimed to bring together researchers, developers, and users from around the world, in both industry and academia, for sharing state-of-the-art results, exploring new areas of research and development, and discussing emerging issues facing intelligent computation and evolutionary computation.

  16. Activity-based computing: computational management of activities reflecting human intention

    DEFF Research Database (Denmark)

    Bardram, Jakob E; Jeuris, Steven; Houben, Steven

    2015-01-01

    Activity-based computing (ABC) is a computing paradigm that has been applied in personal information management applications as well as in ubiquitous, multi-device, and interactive surface computing. ABC has emerged as a response to the traditional application- and file-centered computing paradigm, which is oblivious to a notion of a user’s activity...

  17. [Recommendations for the Stepwise Occupational Reintegration: Can the Characteristics of the Patients Explain the Differences Between the Rehabilitation Centers?].

    Science.gov (United States)

    Schmid, L; Jankowiak, S; Kaluscha, R; Krischak, G

    2016-06-01

    The first step in initiating a stepwise occupational reintegration (SOR) is the recommendation of the rehabilitation centers; the rehabilitation centers therefore have a significant impact on the use of SOR. There is evidence that the recommendation rate differs clearly between rehabilitation centers. The present survey therefore analyses these differences in the recommendation rate in detail and examines which patient-related factors could explain them. The study is based on an analysis of routine data provided by the German pension insurance in Baden-Württemberg (Rehabilitationsstatistikdatenbasis 2013; RSD). Rehabilitation measures were included in the analyses if they were conducted for employed patients (18-64 years) with a musculoskeletal disease or a disorder of the connective tissue. Logistic regression models were fitted to explain the differences in the recommendation rate of the rehabilitation centers. Data on 134 853 rehabilitation measures from 32 rehabilitation centers were available. The recommendation rate ranged from 1.36% to 18.53% across the rehabilitation centers. The logistic regression analysis showed that the period of working incapacity in the 12 months before rehabilitation and the working capacity in the current job were the most important predictors of the recommendation of an SOR by the rehabilitation centers. The rehabilitation centers themselves also have an important influence. The results of this survey indicate that the characteristics of the patients are an important factor in the recommendation of SOR, and that the rehabilitation centers themselves additionally influence the recommendation. The results point to the fact that the rehabilitation centers use different criteria when making a recommendation. © Georg Thieme Verlag KG Stuttgart · New York.

  18. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to the HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  19. Handbook on data centers

    CERN Document Server

    Khan, Samee Ullah

    2015-01-01

    This handbook offers a comprehensive review of the state-of-the-art research achievements in the field of data centers. Contributions from international, leading researchers and scholars offer topics in cloud computing, virtualization in data centers, energy efficient data centers, and next generation data center architecture. It also comprises current research trends in emerging areas, such as data security, data protection management, and network resource management in data centers. Specific attention is devoted to industry needs associated with the challenges faced by data centers, such as various power, cooling, floor space, and associated environmental health and safety issues, while still working to support growth without disrupting quality of service. The contributions cut across various IT data technology domains as a single source to discuss the interdependencies that need to be supported to enable a virtualized, next-generation, energy efficient, economical, and environmentally friendly data center.

  20. Networking at NASA. Johnson Space Center

    Science.gov (United States)

    Garman, John R.

    1991-01-01

    A series of viewgraphs on computer networks at the Johnson Space Center (JSC) is presented. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, the evolution of networks, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.

  1. Annual report of R and D activities in center for promotion of computational science and engineering from April 1, 2004 to March 31, 2005

    International Nuclear Information System (INIS)

    2005-09-01

    This report provides an overview of research and development activities in the Center for Promotion of Computational Science and Engineering (CCSE), JAERI, in the fiscal year 2004 (April 1, 2004 - March 31, 2005). The activities were performed by the Research Group for Computational Science in Atomic Energy, the Research Group for Computational Material Science in Atomic Energy, the R and D Group for Computer Science, the R and D Group for Numerical Experiments, and the Quantum Bioinformatics Group in CCSE. The ITBL (Information Technology Based Laboratory) project is carried out mainly by the R and D Group for Computer Science and the Research Group for Computational Science in Atomic Energy. In the mid-term evaluation of the ITBL project conducted by the MEXT, the achievement of the ITBL infrastructure software developed by JAERI was rated as outstanding at the 13th Information Science and Technology Committee in the Subdivision on R and D Planning and Evaluation of the Council for Science and Technology on April 26th, 2004. (author)

  2. [Feasibility and acceptance of computer-based assessment for the identification of psychosocially distressed patients in routine clinical care].

    Science.gov (United States)

    Sehlen, Susanne; Ott, Martin; Marten-Mittag, Birgitt; Haimerl, Wolfgang; Dinkel, Andreas; Duehmke, Eckhart; Klein, Christian; Schaefer, Christof; Herschbach, Peter

    2012-07-01

    This study investigated the feasibility and acceptance of computer-based assessment for the identification of psychosocial distress in routine radiotherapy care. 155 cancer patients were assessed using the QSC-R10, PO-Bado-SF and Mach-9. The congruence between computerized tablet PC and conventional paper assessment was analysed in 50 patients. The agreement between the two modes was high (ICC 0.869-0.980). Acceptance of computer-based assessment was very high (>95%). Sex, age, education, distress and Karnofsky performance status (KPS) did not influence acceptance. Computerized assessment was rated more difficult by older patients (p = 0.039) and patients with low KPS (p = 0.020). 75.5% of the respondents supported referral for psychosocial intervention for distressed patients. The prevalence of distress was 27.1% (QSC-R10). Computer-based assessment allows easy identification of distressed patients. The level of staff involvement is low, and the results are quickly available for care providers. © Georg Thieme Verlag KG Stuttgart · New York.
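
    The mode agreement reported above is quantified with intraclass correlation coefficients. As a worked illustration (not the authors' code), the sketch below computes the two-way random, absolute-agreement, single-measure ICC(2,1) of Shrout and Fleiss from a subjects-by-modes rating matrix; the toy data are invented.

        # Hedged sketch: ICC(2,1) from ANOVA mean squares (Shrout & Fleiss).
        import numpy as np

        def icc_2_1(ratings):
            n, k = ratings.shape                     # subjects x modes
            grand = ratings.mean()
            ms_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
            ms_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)
            resid = (ratings - ratings.mean(axis=1, keepdims=True)
                     - ratings.mean(axis=0, keepdims=True) + grand)
            ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (
                ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        # Toy example: 5 patients scored once on paper and once on a tablet.
        paper = np.array([10.0, 14.0, 22.0, 8.0, 17.0])
        tablet = np.array([11.0, 13.0, 21.0, 9.0, 18.0])
        print(round(icc_2_1(np.column_stack([paper, tablet])), 3))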

  3. Establishment of computed tomography reference dose levels in Onassis Cardiac Surgery Center

    International Nuclear Information System (INIS)

    Tsapaki, V.; Kyrozi, E.; Syrigou, T.; Mastorakou, I.; Kottou, S.

    2001-01-01

    The purpose of the study was to apply the European Commission (EC) Reference Dose Levels (RDL) to Computed Tomography (CT) examinations at the Onassis Cardiac Surgery Center (OCSC). These are the weighted CT Dose Index (CTDIw) for a single slice and the Dose-Length Product (DLP) for a complete examination. During the period 1998-1999, the total number of CT examinations, every type of CT examination, patient-related data and the technical parameters of the examinations were recorded. The most frequent examinations (head, chest, abdomen and pelvis) were chosen for investigation. CTDI measurements were performed and CTDIw and DLP were calculated. Third-quartile values of CTDIw were 43 mGy for head, 8 mGy for chest, and 22 mGy for abdomen and pelvis examinations. Third-quartile values of DLP were 740 mGy·cm for head, 370 mGy·cm for chest, 490 mGy·cm for abdomen and 420 mGy·cm for pelvis examinations. The results confirm that OCSC successfully follows the proposed RDL for head, chest, abdomen and pelvis examinations in terms of radiation dose. (author)
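
    For reference, the two EC dose descriptors used above are defined as follows (the standard EUR 16262 formulation, restated here rather than quoted from the paper); T is the nominal slice thickness and N the number of slices in sequence i, assuming axial scanning:

        % Weighted CT dose index from center and peripheral phantom readings:
        CTDI_w = \tfrac{1}{3}\, CTDI_{100,\text{center}} + \tfrac{2}{3}\, CTDI_{100,\text{periphery}}
        % Dose-length product accumulated over a complete examination:
        DLP = \sum_i CTDI_{w,i} \, T_i \, N_i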

  4. Electricity Infrastructure Operations Center (EIOC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Electricity Infrastructure Operations Center (EIOC) at PNNL brings together industry-leading software, real-time grid data, and advanced computation into a fully...

  5. USSR Report, Cybernetics Computers and Automation Technology

    Science.gov (United States)

    1985-09-05

    organization, the SKALD program utilizes a dictionary or data base to generate SKALD poetry at the computer center of Minsk State Pedagogical ... wonderful capabilities at the Krasnoyarsk branch of the USSR AN [Academy of Sciences] Siberian section's Computer Center. They began training the kids

  6. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support its mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions in meeting staff needs in these areas.

  7. Performance indicators for call centers with impatience

    NARCIS (Netherlands)

    Jouini, O.; Koole, G.M.; Roubos, A.

    2013-01-01

    An important feature of call center modeling is the presence of impatient customers. This article considers single-skill call centers with customer abandonment. A number of different service-level definitions are structured, including all those used in practice, together with the explicit computation of
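
    One service-level definition common in practice is the fraction of arriving calls answered within an acceptable waiting time, which in the Erlang-A model (M/M/s with exponential patience) is typically estimated numerically. The following is a minimal discrete-event simulation sketch of that quantity; all parameter values are illustrative and not taken from the article.

        # Minimal Erlang-A (M/M/s + M) simulation: fraction of arrivals
        # answered within an acceptable waiting time (AWT).
        import heapq, random

        def service_level(lam, mu, theta, s, awt, n=200_000, seed=1):
            rng = random.Random(seed)
            free = [0.0] * s                       # heap of server next-free times
            t, answered_in_time = 0.0, 0
            for _ in range(n):
                t += rng.expovariate(lam)          # next arrival epoch
                start = max(t, free[0])            # FCFS offered start of service
                wait = start - t
                if wait > rng.expovariate(theta):  # patience exceeded: abandon
                    continue
                heapq.heapreplace(free, start + rng.expovariate(mu))
                if wait <= awt:
                    answered_in_time += 1
            return answered_in_time / n

        # 5 calls/min, 1-min mean handle time, 2-min mean patience,
        # 6 agents, 20-second acceptable waiting time:
        print(service_level(lam=5.0, mu=1.0, theta=0.5, s=6, awt=1 / 3))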

  8. First Ph.D. Student Workshop of the Hermann von Helmholtz Association of National Research Centers (HGF) on ''Nuclear Safety Research''

    International Nuclear Information System (INIS)

    Knebel, J.U.; Sanchez Espinoza, V.H.

    2006-03-01

    The First Ph.D. Student Workshop ''Nuclear Safety Research'' of the Helmholtz Association of National Research Centers (HGF) was jointly organized by Forschungszentrum Karlsruhe GmbH and Energie Baden-Wuerttemberg AG (EnBW) from Wednesday 9th to Friday 11th March 2005. The workshop was opened with welcome greetings by Dr. Peter Fritz, Forschungszentrum Karlsruhe. Subsequently, Dr. Joachim U. Knebel explained the main goals and content of the workshop. The young scientists reported in 28 high-level presentations on their research work, which covered a wide spectrum from reactor safety, partitioning and transmutation, and innovative reactor systems to safety research for nuclear waste disposal. The junior researchers showed excellent professional competence and demonstrated presentation skills at the highest level. The successful funding of two Virtual Institutes, namely ''Competence in Nuclear Technologies'' and ''Functional Characteristics of Aquatic Interfaces'', both coordinated by Forschungszentrum Karlsruhe, by the President of the Helmholtz Association, Prof. Walter Kroell, was the motivation for organizing this first Ph.D. Student Workshop. Thanks to these two Virtual Institutes, the Research Centers Karlsruhe and Juelich, together with several universities, i.e. RWTH Aachen, Heidelberg, Karlsruhe, Muenster, and Stuttgart, have successfully financed eight Ph.D. and two post-doctoral students. Moreover, young scientists of the European Institute for Transuranium Elements (ITU) and a further seven Ph.D. students, who are sponsored by the German nuclear industry (Framatome ANP, RWE Power, EnBW) in the frame of the Alliance Competence in Nuclear Technology and who are trained at Forschungszentrum Karlsruhe, actively contributed to this workshop. The EnBW Award was handed over by Dr. Hans-Josef Zimmer, member of the board of directors of the EnBW-Kraftwerksgesellschaft, to Mrs. Ayelet Walter from the University of Stuttgart for the best

  9. Datacenter Changes vs. Employment Rates for Datacenter Managers In the Cloud Computing Era

    OpenAIRE

    Mirzoev, Timur; Benson, Bruce; Hillhouse, David; Lewis, Mickey

    2014-01-01

    Due to the evolving cloud computing paradigm, there is a prevailing concern that in the near future data center managers may be in short supply. Cloud computing as a whole is becoming more prevalent in today's computing world. In fact, cloud computing has become so popular that some now refer to data centers as cloud centers. How does this interest in cloud computing translate into employment rates for data center managers? The popularity of the public and private cloud models are...

  10. User perspectives on computer applications

    International Nuclear Information System (INIS)

    Trammell, H.E.

    1979-04-01

    Experiences of a technical group that uses the services of computer centers are recounted. An orientation on the ORNL Engineering Technology Division and its missions is given to provide background on the diversified efforts undertaken by the Division and its opportunities to benefit from computer technology. Specific ways in which computers are used within the Division are described; these include facility control, data acquisition, data analysis, theory applications, code development, information processing, cost control, management of purchase requisitions, maintenance of personnel information, and control of technical publications. Problem areas found to need improvement are the overloading of computers during normal working hours, lack of code transportability, delay in obtaining routine programming, delay in key punching services, bewilderment in the use of large computer centers, complexity of job control language, and uncertain quality of software. 20 figures

  11. Computer-assisted optimization of chest fluoroscopy

    International Nuclear Information System (INIS)

    Korolyuk, I.P.; Filippova, N.V.; Kirillov, L.P.; Momsenko, S.F.

    1987-01-01

    The main trends in the use of computers for the optimization of chest fluorography among the employees and workers of a large industrial enterprise are considered. The following directions were determined: automated sorting of fluorograms, formalization of X-ray signs in describing fluorograms, and organization of a special system of fluorographic data management. Four levels of algorithms to solve the problems of fluorography were considered: (1) shops, personnel department, etc.; (2) an automated center for mass screening and a medical unit; (3) a computer center; and (4) the planning and management service. The results of computer use over a 3-year period were analyzed, and the efficacy of computer use was shown.

  12. Computational sustainability

    CERN Document Server

    Kersting, Kristian; Morik, Katharina

    2016-01-01

    The book at hand gives an overview of state-of-the-art research in Computational Sustainability, as well as case studies of different application scenarios. These cover topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, and industrial production and quality. The book describes computational methods and possible application scenarios.

  13. Design and analysis of a tendon-based computed tomography-compatible robot with remote center of motion for lung biopsy.

    Science.gov (United States)

    Yang, Yunpeng; Jiang, Shan; Yang, Zhiyong; Yuan, Wei; Dou, Huaisu; Wang, Wei; Zhang, Daguang; Bian, Yuan

    2017-04-01

    Nowadays, biopsy is a decisive method of lung cancer diagnosis, but lung biopsy is time-consuming, complex and inaccurate. A computed tomography-compatible robot for rapid and precise lung biopsy is therefore developed in this article. Following the actual operation process, the robot is divided into two modules: a 4-degree-of-freedom position module for locating the puncture point, suitable for almost all patient positions, and a 3-degree-of-freedom tendon-based orientation module with remote center of motion, compact and computed tomography-compatible, which orients and inserts the needle automatically inside the computed tomography bore. The workspace of the robot surrounds the patient's thorax, and the needle tip forms a cone under the patient's skin. A new error model of the robot based on screw theory is proposed in view of structure error and actuation error, which are regarded as screw motions. Simulation is carried out to verify the precision of the error model, contrasted with compensation via inverse kinematics. The results of an insertion experiment on a specific phantom prove the feasibility of the robot, with a mean error of 1.373 mm in a laboratory environment, which is accurate enough to replace manual operation.
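
    Treating structure and actuation errors as screw motions means composing small rigid-body displacements as exponentials of twists. The sketch below implements the standard twist-to-SE(3) exponential map (Rodrigues' formula); it illustrates the underlying machinery only and is not the authors' error model.

        # Exponential of a unit twist (v, w) in se(3), scaled by theta;
        # standard screw-theory formula (||w|| = 1 assumed).
        import numpy as np

        def hat(w):
            """3-vector -> skew-symmetric matrix."""
            return np.array([[0.0, -w[2], w[1]],
                             [w[2], 0.0, -w[0]],
                             [-w[1], w[0], 0.0]])

        def exp_twist(v, w, theta):
            W = hat(w)
            R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W
            G = (np.eye(3) * theta + (1 - np.cos(theta)) * W
                 + (theta - np.sin(theta)) * W @ W)
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = G @ v
            return T

        # A small 0.01-rad rotational error about z combined with a
        # translational twist component along x:
        print(exp_twist(np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]), 0.01))

    Composing one such factor per error source and comparing the product with the nominal kinematics yields the kind of tip-position error the article reports.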

  14. Eustatic control on epicontinental basins: The example of the Stuttgart Formation in the Central European Basin (Middle Keuper, Late Triassic)

    Science.gov (United States)

    Franz, M.; Nowak, K.; Berner, U.; Heunisch, C.; Bandel, K.; Röhling, H.-G.; Wolfgramm, M.

    2014-11-01

    The deposition of the Stuttgart Formation ('Schilfsandstein'), commonly considered a type-example of the Carnian Pluvial Event, was controlled by high-frequency 4th-order sequences that resulted in pre-, intra- and post-Schilfsandstein transgressions of Tethyan waters into the epicontinental Central European Basin (CEB). The pre-Schilfsandstein transgression flooded the CEB through gates to the Southeast and resulted in a wide-spread inland sea that was characterised by increased biological productivity and predominantly oxic conditions, and enabled the immigration of euryhaline marine fauna with plankton, ostracodes, fishes, bivalves and the gastropods Omphaloptychia suebica n. sp. and Settsassia stuttgartica n. sp. The rather short-term intra- and post-Schilfsandstein transgressions flooded the CEB from the Southwest and Southeast and established a shallow brackish inland sea that stretched up to North Germany. Both the 4th- and 3rd-order sequences derived from the succession in the CEB correlate well with those derived from successions of the Tethyan shelves. Pronounced circum-Tethyan eustatic cycles are therefore evidenced and may have had considerable impact on prominent middle Carnian events: the Reingraben turnover, the Carnian Pluvial Event, the Carnian Crisis and the Mid Carnian Wet Intermezzo. The broad circum-Tethyan evidence of 10^6-year scale cycles suggests glacioeustatic sea-level changes even in the Triassic greenhouse period.

  15. McMaster University: College and University Computing Environment.

    Science.gov (United States)

    CAUSE/EFFECT, 1988

    1988-01-01

    The computing and information services (CIS) organization includes administrative computing, academic computing, and networking, and has three divisions: computing services, development services, and information services. Other computing activities include Health Sciences, the Humanities Computing Center, and the Department of Computer Science and Systems.

  16. Root Canal Transportation and Centering Ability of Nickel-Titanium Rotary Instruments in Mandibular Premolars Assessed Using Cone-Beam Computed Tomography.

    Science.gov (United States)

    Mamede-Neto, Iussif; Borges, Alvaro Henrique; Guedes, Orlando Aguirre; de Oliveira, Durvalino; Pedro, Fábio Luis Miranda; Estrela, Carlos

    2017-01-01

    The aim of this study was to evaluate, using cone-beam computed tomography (CBCT), the transportation and centralization of different nickel-titanium (NiTi) rotary instruments. One hundred and twenty-eight mandibular premolars were selected and instrumented using the following brands of NiTi files: WaveOne, WaveOne Gold, Reciproc, ProTaper Next, ProTaper Gold, Mtwo, BioRaCe and RaCe. CBCT imaging was performed before and after root canal preparation to obtain measurements of the mesial and distal dentin walls and calculations of root canal transportation and centralization. A normal distribution of the data was confirmed by the Kolmogorov-Smirnov and Levene tests, and the results were assessed using the Kruskal-Wallis test. Statistical significance was set at 5%. ProTaper Gold produced the lowest canal transportation values, and RaCe the highest. ProTaper Gold files also showed the highest values for centering ability, whereas BioRaCe showed the lowest. No significant differences were found across the different instruments in terms of canal transportation and centering ability (P > 0.05). Based on the methodology employed, all instruments used for root canal preparation of mandibular premolars performed similarly with regard to canal transportation and centering ability.

  17. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston-Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering, with a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In January 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston-Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview: In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October, a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations: Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shift Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  19. Computer applications in controlled fusion research

    International Nuclear Information System (INIS)

    Killeen, J.

    1975-01-01

    The application of computers to controlled thermonuclear research (CTR) is essential. In the near future the use of computers in the numerical modeling of fusion systems should increase substantially. A recent panel has identified five categories of computational models to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies is called for. The development and application of computer codes to implement these models is a vital step in reaching the goal of fusion power. To meet the needs of the fusion program the National CTR Computer Center has been established at the Lawrence Livermore Laboratory. A large central computing facility is linked to smaller computing centers at each of the major CTR Laboratories by a communication network. The crucial element needed for success is trained personnel. The number of people with knowledge of plasma science and engineering trained in numerical methods and computer science must be increased substantially in the next few years. Nuclear engineering departments should encourage students to enter this field and provide the necessary courses and research programs in fusion computing

  20. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in the modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  1. Computer-Based Training in Eating and Nutrition Facilitates Person-Centered Hospital Care: A Group Concept Mapping Study.

    Science.gov (United States)

    Westergren, Albert; Edfors, Ellinor; Norberg, Erika; Stubbendorff, Anna; Hedin, Gita; Wetterstrand, Martin; Rosas, Scott R; Hagell, Peter

    2018-04-01

    Studies have shown that computer-based training in eating and nutrition for hospital nursing staff increased the likelihood that patients at risk of undernutrition would receive nutritional interventions. This article seeks to provide an understanding, from the perspective of nursing staff, of conceptually important areas of computer-based nutritional training and their relative importance to nutritional care following completion of the training. Group concept mapping, an integrated qualitative and quantitative methodology, was used to conceptualize important factors relating to the training experiences through four focus groups (n = 43), statement sorting (n = 38), and importance rating (n = 32), followed by multidimensional scaling and cluster analysis. Sorting of 38 statements yielded four clusters. These clusters (number of statements) were as follows: personal competence and development (10), practice-close care development (10), patient safety (9), and awareness about the nutrition care process (9). The first and second clusters represented "the learning organization," and the third and fourth represented "quality improvement." These findings provide a conceptual basis for understanding the importance of training in eating and nutrition, which contributes to a learning organization and quality improvement, and can be linked to and facilitates person-centered nutritional care and patient safety.
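
    Analytically, group concept mapping turns the participants' card sorts into a statement-by-statement co-occurrence matrix, embeds the statements in the plane with multidimensional scaling, and then clusters the coordinates (Ward's method is the usual choice). A toy sketch of that pipeline, with invented data rather than the study's, could look like this:

        # Toy group-concept-mapping pipeline:
        # sorts -> dissimilarity -> MDS -> hierarchical clustering.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from sklearn.manifold import MDS

        statements = ["hands-on training", "patient safety", "screening routines",
                      "team feedback", "risk awareness", "care planning"]
        # One sort per participant: piles of statement indices grouped together.
        sorts = [[[0, 3], [1, 4], [2, 5]],
                 [[0, 3, 5], [1, 2, 4]],
                 [[0, 1], [2, 5], [3, 4]]]

        n = len(statements)
        co = np.zeros((n, n))
        for sort in sorts:
            for pile in sort:
                for i in pile:
                    for j in pile:
                        co[i, j] += 1
        dissim = 1 - co / len(sorts)        # sorted-together rate -> distance

        xy = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissim)
        clusters = fcluster(linkage(xy, method="ward"), t=2, criterion="maxclust")
        for statement, cluster in zip(statements, clusters):
            print(cluster, statement)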

  2. PREFACE: 9th World Congress on Computational Mechanics and 4th Asian Pacific Congress on Computational Mechanics

    Science.gov (United States)

    Khalili, N.; Valliappan, S.; Li, Q.; Russell, A.

    2010-07-01

    The use of mathematical models of natural phenomena has underpinned science and engineering for centuries, but until the advent of modern computers and computational methods, the full utility of most of these models remained outside the reach of the engineering communities. Since World War II, advances in computational methods have transformed the way engineering and science are undertaken throughout the world. Today, theories of the mechanics of solids and fluids, electromagnetism, heat transfer, plasma physics, and other scientific disciplines are implemented through computational methods in engineering analysis, design, and manufacturing, and in studying broad classes of physical phenomena. The discipline concerned with the application of computational methods is now a key area of research, education, and application throughout the world. In the early 1980s, the International Association for Computational Mechanics (IACM) was founded to promote activities related to computational mechanics and has made impressive progress. The most important scientific event of the IACM is the World Congress on Computational Mechanics. The first was held in Austin (USA) in 1986, followed by Stuttgart (Germany) in 1990, Chiba (Japan) in 1994, Buenos Aires (Argentina) in 1998, Vienna (Austria) in 2002, Beijing (China) in 2004, Los Angeles (USA) in 2006 and Venice (Italy) in 2008. The 9th World Congress on Computational Mechanics is held in conjunction with the 4th Asian Pacific Congress on Computational Mechanics under the auspices of the Australian Association for Computational Mechanics (AACM), the Asian Pacific Association for Computational Mechanics (APACM) and the International Association for Computational Mechanics (IACM). The 1st Asian Pacific Congress was held in Sydney (Australia) in 2001, then in Beijing (China) in 2004 and Kyoto (Japan) in 2007. The WCCM/APCOM 2010 publications consist of a printed book of abstracts given to delegates, along with 247 full-length peer-reviewed papers published with

  3. Applied technology center business plan and market survey

    Science.gov (United States)

    Hodgin, Robert F.; Marchesini, Roberto

    1990-01-01

    A business plan and market survey for the Applied Technology Center (ATC), a non-profit corporation for computer technology transfer and development, are presented. The mission of the ATC is to stimulate innovation in state-of-the-art and leading-edge computer-based technology. The ATC encourages the practical utilization of late-breaking computer technologies by firms of all kinds.

  4. 76 FR 14669 - Privacy Act of 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007

    Science.gov (United States)

    2011-03-17

    ... 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice of computer... notice establishes a computer matching agreement between CMS and the Department of Defense (DoD). We have...

  5. The combinatorics computation for Casimir operators of the symplectic Lie algebra and the application for determining the center of the enveloping algebra of a semidirect product

    International Nuclear Information System (INIS)

    Le Van Hop.

    1989-12-01

    A combinatorial computation is used to describe the Casimir operators of the symplectic Lie algebra. This result is applied to determine the center of the enveloping algebra of the semidirect product of the Heisenberg Lie algebra and the symplectic Lie algebra. (author). 10 refs
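
    For orientation (a standard fact from the literature, not taken from the abstract): the center of the universal enveloping algebra of sp(2n) is a polynomial algebra on n generators, which may be taken to be Casimir elements of degrees 2, 4, ..., 2n. One common realization, with conventions that vary between authors, builds them from traces of powers of the matrix of generators:

        % Let E = (E_{ij}) be the matrix whose entries are the generators of
        % \mathfrak{sp}_{2n} in the defining representation; then the
        % Gelfand-type invariants
        C_{2k} = \operatorname{tr}\!\left(E^{2k}\right), \qquad k = 1, \dots, n,
        % generate the center Z\bigl(U(\mathfrak{sp}_{2n})\bigr) as a
        % polynomial algebra.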

  6. The use of personal computers in reactor physics

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1988-01-01

    This paper points out that personal computers are now powerful enough, in terms of core size and speed, to be used for serious reactor physics applications. In addition, the low cost of personal computers means that even small institutes can now have access to a significant amount of computing power. At the present time, distribution centers such as RSIC are beginning to distribute reactor physics codes for use on personal computers; hopefully, in the near future more and more of these codes will become available through such distribution centers.

  7. Computational atomic and nuclear physics

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.; McGrory, J.B.

    1990-01-01

    The evolution of parallel processor supercomputers in recent years provides opportunities to investigate in detail many complex problems, in many branches of physics, which were considered to be intractable only a few years ago. But to take advantage of these new machines, one must have a better understanding of how the computers organize their work than was necessary with previous single processor machines. Equally important, the scientist must have this understanding as well as a good understanding of the structure of the physics problem under study. In brief, a new field of computational physics is evolving, which will be led by investigators who are highly literate both computationally and physically. A Center for Computationally Intensive Problems has been established with the collaboration of the University of Tennessee Science Alliance, Vanderbilt University, and the Oak Ridge National Laboratory. The objective of this Center is to carry out forefront research in computationally intensive areas of atomic, nuclear, particle, and condensed matter physics. An important part of this effort is the appropriate training of students. An early effort of this Center was to conduct a Summer School of Computational Atomic and Nuclear Physics. A distinguished faculty of scientists in atomic, nuclear, and particle physics gave lectures on the status of present understanding of a number of topics at the leading edge in these fields, and emphasized those areas where computational physics was in a position to make a major contribution. In addition, there were lectures on numerical techniques which are particularly appropriate for implementation on parallel processor computers and which are of wide applicability in many branches of science

  8. DATA CENTER REMODELING FOR THE INTERNET OF THINGS

    Directory of Open Access Journals (Sweden)

    Cristian IVĂNUŞ

    2015-05-01

    Designing an efficient data center is more than ever a challenge for many companies that must meet the requirement of greater and extensible computing capacity. Increasing volumes of data have to be stored, applications have become increasingly complex, and the requirements for running business operations must be very flexible. If the ICT (Information Technology and Communication) infrastructure is to continuously offer a high level of service, it is essential to rethink the "robustness" of data centers and the need for innovation in them. This is true for servers and storage as well as for processing power. Companies operating in this field help customers optimize their data center availability and security by evaluating energy consumption, cooling capacity and other factors involved in data center upgrades. The objective is an optimum in terms of occupied space, power consumption and cooling, in order to achieve sustainable long-term data center operation. The explosive development and modernization of data centers is a result of the three major trends in IT (Information Technology) today: cloud computing, the Internet of Things and Big Data [1, 9].

  9. Computer Technology for Industry

    Science.gov (United States)

    1979-01-01

    In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effects by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC) (registered trademark), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.

  10. USERDA computer program summaries. Numbers 177--239

    International Nuclear Information System (INIS)

    1975-10-01

    Since 1960 the Argonne Code Center has served as a U. S. Atomic Energy Commission information center for computer programs developed and used primarily for the solution of problems in nuclear physics, reactor design, reactor engineering and operation. The Center, through a network of registered installations, collects, validates, maintains, and distributes a library of these computer programs and publishes a compilation of abstracts describing them. In 1972 the scope of the Center's activities was officially expanded to include computer programs developed in all of the U. S. Atomic Energy Commission program areas and the compilation and publication of this report. The Computer Program Summary report contains summaries of computer programs at the specification stage, under development, being checked out, in use, or available at ERDA offices, laboratories, and contractor installations. Programs are divided into the following categories: cross section and resonance integral calculations; spectrum calculations, generation of group constants, lattice and cell problems; static design studies; depletion, fuel management, cost analysis, and reactor economics; space-independent kinetics; space--time kinetics, coupled neutronics--hydrodynamics--thermodynamics and excursion simulations; radiological safety, hazard and accident analysis; heat transfer and fluid flow; deformation and stress distribution computations, structural analysis and engineering design studies; gamma heating and shield design programs; reactor systems analysis; data preparation; data management; subsidiary calculations; experimental data processing; general mathematical and computing system routines; materials; environmental and earth sciences; space sciences; electronics and engineering equipment; chemistry; particle accelerators and high-voltage machines; physics; controlled thermonuclear research; biology and medicine; and data

  11. Computer Operating System Maintenance.

    Science.gov (United States)

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on... computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  12. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the consequent experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network implying the dispatch and sharing of tasks has been set up: the W-LCG grid (Worldwide LHC Computing Grid), which is made up of four tiers. Tier 0 is the computer center at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and dispatching it to the 11 Tier 1 centers. A Tier 1 is typically a national center; it is responsible for keeping a copy of the raw data and for processing it in order to recover relevant data with a physical meaning, and for transferring the results to the 150 Tier 2 centers. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 centers, at the level of individual laboratories, provide a complementary local resource to Tier 2 in terms of data analysis. (A.C.)
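
    A quick consistency check of those orders of magnitude (illustrative arithmetic only; the accelerator live-time figure is an assumption, not from the text):

        # Back-of-envelope check of the quoted W-LCG data rates.
        rate_bytes_per_s = 2e9          # "a few 10^9 octets per second" (assumed 2)
        live_seconds_per_year = 1e7     # typical accelerator live time (assumed)
        bytes_per_year = rate_bytes_per_s * live_seconds_per_year
        print(f"{bytes_per_year:.0e} bytes/year = {bytes_per_year / 1e15:.0f} PB")
        # -> 2e+16 bytes/year = 20 PB, i.e. "a few tens of 10^15 octets each year"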

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. (Figure 3: Number of events per month, data.) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  14. Center conditions and limit cycles for BiLiénard systems

    Directory of Open Access Journals (Sweden)

    Jaume Giné

    2017-03-01

    In this article we study the center problem for polynomial BiLiénard systems of degree n. Computing the focal values and using Gröbner bases, we find the center conditions for such systems for n=6. We also establish a conjecture about the center conditions for polynomial BiLiénard systems of arbitrary degree.

  15. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world: the Erasmus Computing Grid.

  16. Annual report of R and D activities in Center for Computational Science and e-Systems from April 1, 2007 to March 31, 2009

    International Nuclear Information System (INIS)

    2010-01-01

    This report provides an overview of research and development activities in the Center for Computational Science and e-Systems (CCSE), JAEA, during the fiscal years 2007 and 2008 (April 1, 2007 - March 31, 2009). These research and development activities were performed by the Simulation Technology R and D Office and the Computer Science R and D Office. They include development of a secure computational infrastructure for atomic energy research based on grid technology, large scale seismic analysis of an entire nuclear reactor structure, large scale fluid dynamics simulation of the J-PARC mercury target, large scale plasma simulation for nuclear fusion reactors, large scale atomic and subatomic simulations of nuclear fuels and materials for safety assessment, large scale quantum simulations of superconductors for the design of new devices and a fundamental understanding of superconductivity, development of a protein database for the identification of radiation-resistance genes, and large scale atomic simulation of proteins. (author)

  17. Underutilization of information and knowledge in everyday medical practice: Evaluation of a computer-based solution

    Directory of Open Access Journals (Sweden)

    Fritz Peter

    2008-11-01

    Background: The medical history is acknowledged as the sine qua non of quality medical care, because recognizing problems is a prerequisite for managing them. Medical histories, however, are typically incomplete and inaccurate. We show here that computers are a solution to this issue of information gathering about patients: computers can be programmed to acquire more complete medical histories, with greater detail across a range of acute and chronic issues, than physician histories. Methods: Histories were acquired by physicians in the usual way and by a computer program interacting directly with patients. Decisions about which medical issues were queried by the computer were made internally by the software, including determination of the chief complaint. Patients were selected by convenience sampling from admissions to the Robert-Bosch-Hospital, Stuttgart, Germany. Physician-acquired and computer-acquired histories were compared on a patient-by-patient basis for 45 patients. Results: The computer histories reported 160 problems not recorded in the physician histories, or slightly more than 3.5 problems per patient; the physicians, but not the computer, reported 13 problems. The data show that the computer histories reported problems across a range of organ systems, that the problems detected by the computer but not the physician histories were both acute and chronic, and that the computer histories detected a significant number of issues important for preventing further morbidity. Conclusion: A combination of physician- and computer-acquired histories in non-emergent situations, with the latter available to the physician at the time he or she sees the patient, is a far superior method for collecting historical data than the physician interview alone.

  18. Analytic reducibility of nondegenerate centers: Cherkas systems

    Directory of Open Access Journals (Sweden)

    Jaume Giné

    2016-07-01

    where $P_i(x)$ are polynomials of degree $n$, $P_0(0)=0$ and $P_0'(0)<0$. Computing the focal values, we find the center conditions for such systems for degree $3$, and, using modular arithmetic, for degree $4$. Finally, we state a conjecture about the center conditions for Cherkas polynomial differential systems of degree $n$.
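
    The record reaches us without the system itself being displayed; for orientation, Cherkas systems are usually written in the following form (restated from the general literature, and consistent with the conditions on $P_0$ quoted above):

        % Presumed form of the Cherkas system (assumption, not quoted from
        % the paper):
        \dot{x} = y, \qquad \dot{y} = P_0(x) + P_1(x)\,y + P_2(x)\,y^2,
        % with P_0(0) = 0 and P_0'(0) < 0, so that the origin is a
        % monodromic singular point (a center or a focus).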

  19. [Imaging center - optimization of the imaging process].

    Science.gov (United States)

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a large part of the success, but also of the costs, of treatment. In routine work, an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient; they are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization has been exclusively on the quality and efficiency of performed single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new organizational structures (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification for performed exams to gratification for process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  20. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    The cloud computing paradigm has been defined from several points of view, the two main directions being either as an evolution of the grid and distributed computing paradigm or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied to deploying various applications.

  1. Development of an Instrument to Measure Health Center (HC) Personnel's Computer Use, Knowledge and Functionality Demand for HC Computerized Information System in Thailand

    OpenAIRE

    Kijsanayotin, Boonchai; Pannarunothai, Supasit; Speedie, Stuart

    2005-01-01

    Knowledge about the socio-technical aspects of information technology (IT) is vital for the success of health IT projects. The Thai health administration anticipates using health IT to support the recently implemented national universal health care system. However, national knowledge associated with the socio-technical aspects of health IT has not been studied in Thailand. A survey instrument measuring Thai health center (HC) personnel's computer use, basic IT knowledge a...

  2. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    With technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production...), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  3. Center for Coastline Security Technology, Year-2

    National Research Council Canada - National Science Library

    Glegg, Stewart; Glenn, William; Furht, Borko; Beaujean, P. P; Frisk, G; Schock, S; VonEllenrieder, K; Ananthakrishnan, P; An, E; Granata, R

    2007-01-01

    ...), the Imaging Technology Center, the Department of Computer Science and Engineering, and the University Consortium for Intermodal Transportation Safety and Security at Florida Atlantic University...

  4. How to Bill Your Computer Services.

    Science.gov (United States)

    Dooskin, Herbert P.

    1981-01-01

    A computer facility billing procedure should be designed so that the full costs of a computer center operation are equitably charged to the users. Design criteria, costing methods, and management's role are discussed. (Author/MLF)
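
    The equitable full-cost charging the article calls for can be made concrete with a small calculation. The sketch below is a hypothetical illustration, not the author's billing procedure: it spreads a center's total operating cost over users in proportion to a weighted mix of CPU hours and storage use, and every figure, weight and user name in it is invented.

        # Hypothetical full-cost chargeback: allocate the center's total cost
        # to users by weighted resource consumption (all figures invented).
        TOTAL_COST = 250_000.0           # annual operating cost, dollars
        CPU_W, STO_W = 0.7, 0.3          # cost weights: CPU vs. storage

        usage = {                        # user -> (cpu_hours, gb_months)
            "physics": (12_000, 800),
            "admin":   (3_000, 2_400),
            "library": (1_000, 300),
        }

        cpu_total = sum(c for c, _ in usage.values())
        sto_total = sum(s for _, s in usage.values())

        for user, (cpu, sto) in usage.items():
            share = CPU_W * cpu / cpu_total + STO_W * sto / sto_total
            print(f"{user:>8}: ${share * TOTAL_COST:,.2f}")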

  5. Outline of Toshiba Business Information Center

    Science.gov (United States)

    Nagata, Yoshihiro

    Toshiba Business Information Center gathers and stores in-house and external business information used in common within Toshiba Corp., and provides companywide circulation, reference and other services. The Center established a centralized information management system by employing decentralized computers, electronic filing equipment (30 cm laser discs) and other office automation equipment. Online retrieval through the LAN is available for searching the stored documents, and a growing number of copying requests are processed from the electronic files. This paper describes the purpose of establishing the Center, its facilities and management scheme, the systematization of the files, and the present situation and plans of each information service.

  6. Colorado Learning Disabilities Research Center.

    Science.gov (United States)

    DeFries, J. C.; And Others

    1997-01-01

    Results obtained from the center's six research projects are reviewed, including research on psychometric assessment of twins with reading disabilities, reading and language processes, attention deficit-hyperactivity disorder and executive functions, linkage analysis and physical mapping, computer-based remediation of reading disabilities, and…

  7. Annual report of R and D activities in Center for Computational Science and e-Systems from April 1, 2009 to March 31, 2010

    International Nuclear Information System (INIS)

    2011-10-01

    This report overviews the research and development (R and D) activity in the Center for Computational Science and e-Systems (CCSE) of the Japan Atomic Energy Agency (JAEA) during the fiscal year 2009 (April 1, 2009 - March 31, 2010). The work was accomplished by the Simulation Technology R and D Office and the Computer Science R and D Office in CCSE. The activity includes research on a secure computational infrastructure for use in atomic energy research, based on grid technology; a seismic response analysis for the structures of nuclear power plants; materials science; and quantum bioinformatics. The materials science research includes large-scale atomic and subatomic simulations of nuclear fuels and materials for safety assessment, and large-scale quantum simulations of superconductors for the design of new devices and a fundamental understanding of superconductivity. The quantum bioinformatics research focuses on the development of technology for large-scale atomic simulations of proteins. (author)

  8. 78 FR 39730 - Privacy Act of 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302

    Science.gov (United States)

    2013-07-02

    ... 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS intends to conduct with State-based Administering...

  9. NASA Langley Research Center outreach in astronautical education

    Science.gov (United States)

    Duberg, J. E.

    1976-01-01

    The Langley Research Center has traditionally maintained an active relationship with the academic community, especially at the graduate level, to promote the Center's research program and to make graduate education available to its staff. Two new institutes at the Center - the Joint Institute for Acoustics and Flight Sciences, and the Institute for Computer Applications - are discussed. Both provide for research activity at the Center by university faculties. The American Society of Engineering Education Summer Faculty Fellowship Program and the NASA-NRC Postdoctoral Resident Research Associateship Program are also discussed.

  10. USERDA computer software summaries: numbers 240 through 324

    International Nuclear Information System (INIS)

    1976-12-01

    Since 1960 the Argonne Code Center has served as a U.S. Atomic Energy Commission information center for computer programs developed and used primarily for the solution of problems in nuclear physics, reactor design, reactor engineering and operation. The Center, through a network of registered installations, collects, validates, maintains, and distributes a library of these computer programs and publishes a compilation of abstracts describing them. In 1972 the scope of the Center's activities was officially expanded to include computer programs developed in all of the U.S. Atomic Energy Commission program areas and the compilation and publication of this report. The Computer Software Summary report contains summaries of computer programs at the specification stage, under development, being checked out, in use, or available at ERDA offices, laboratories, and contractor installations. Programs are divided into the following categories: cross section and resonance integral calculations; spectrum calculations, generation of group constants, lattice and cell problems; static design studies; depletion, fuel management, cost analysis, and reactor economics; space-independent kinetics; space-time kinetics, coupled neutronics-hydrodynamics-thermodynamics and excursion simulations; radiological safety, hazard and accident analysis; heat transfer and fluid flow; deformation and stress distribution computations, structural analysis and engineering design studies; gamma heating and shield design programs; reactor systems analysis; data preparation; data management; subsidiary calculations; experimental data processing; general mathematical and computing system routines; materials; environmental and earth sciences; space sciences; electronics and engineering equipment; chemistry; particle accelerators and high-voltage machines; physics; controlled thermonuclear research; biology and medicine; and data

  11. Computer Training for Seniors: An Academic-Community Partnership

    Science.gov (United States)

    Sanders, Martha J.; O'Sullivan, Beth; DeBurra, Katherine; Fedner, Alesha

    2013-01-01

    Computer technology is integral to information retrieval, social communication, and social interaction. However, only 47% of seniors aged 65 and older use computers. The purpose of this study was to determine the impact of a client-centered computer program on computer skills, attitudes toward computer use, and generativity in novice senior…

  12. Air flow management in raised floor data centers

    CERN Document Server

    Arghode, Vaibhav K

    2016-01-01

    The Brief discusses primarily two aspects of air flow management in raised floor data centers. Firstly, cooling air delivery through perforated tiles is examined, and the influence of tile geometry on flow field development and hot air entrainment above perforated tiles is discussed. Secondly, the use of cold aisle containment to physically separate hot and cold regions and minimize hot and cold air mixing is presented. Both experimental investigations and computational efforts are discussed, and the development of computational fluid dynamics (CFD) based models for simulating air flow in data centers is included. In addition, metrology tools for facility-scale air velocity and temperature measurement, and for air flow rate measurement through perforated floor tiles and server racks, are examined, and the authors present thermodynamics-based models to gauge the effectiveness and importance of air flow management schemes in data centers.
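
    One widely used thermodynamics-based metric of the kind the Brief refers to is the Supply Heat Index (SHI), which expresses what fraction of a rack's air temperature rise is due to hot air recirculating into the cold aisle. The sketch below is an illustrative computation with invented temperature readings, not code from the book.

        # Supply Heat Index: 0 means no hot-air recirculation into the cold
        # aisle; values approaching 1 indicate severe recirculation.
        def supply_heat_index(t_supply, t_rack_in, t_rack_out):
            return (t_rack_in - t_supply) / (t_rack_out - t_supply)

        # Invented readings (deg C) for three racks fed by the same tiles.
        racks = [(15.0, 17.0, 32.0), (15.0, 21.0, 33.0), (15.0, 16.0, 31.0)]
        for i, (ts, ti, to) in enumerate(racks, start=1):
            print(f"rack {i}: SHI = {supply_heat_index(ts, ti, to):.2f}")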

  13. Ways to increase the effectiveness of using computers and machine programs

    Energy Technology Data Exchange (ETDEWEB)

    Bulgakov, R T; Bagautdinov, G M; Kovalenko, Yu M

    1979-01-01

    An analysis is conducted of statistical data about the operation of the computers at the computer center of the Tatar Scientific Research and Design Institute for Oil. The reasons affecting the effectiveness of computer and program use are identified through an expert questionnaire, and an ''effectiveness tree'' is compiled. Organizational measures are formulated for the executor (the computer center), the users, and senior management, which are required in order to use the computers successfully.

  14. 78 FR 50419 - Privacy Act of 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310

    Science.gov (United States)

    2013-08-19

    ... 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS plans to conduct with the Department of Homeland...

  15. Computer applications in controlled fusion research

    International Nuclear Information System (INIS)

    Killeen, J.

    1975-02-01

    The role of Nuclear Engineering Education in the application of computers to controlled fusion research can be a very important one. In the near future the use of computers in the numerical modelling of fusion systems should increase substantially. A recent study group has identified five categories of computational models to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are called for. The development and application of computer codes to implement these models is a vital step in reaching the goal of fusion power. In order to meet the needs of the fusion program the National CTR Computer Center has been established at the Lawrence Livermore Laboratory. A large central computing facility is linked to smaller computing centers at each of the major CTR laboratories by a communications network. The crucial element that is needed for success is trained personnel. The number of people with knowledge of plasma science and engineering that are trained in numerical methods and computer science is quite small, and must be increased substantially in the next few years. Nuclear Engineering departments should encourage students to enter this field and provide the necessary courses and research programs in fusion computing. (U.S.)

  16. Diamond NV centers for quantum computing and quantum networks

    NARCIS (Netherlands)

    Childress, L.; Hanson, R.

    2013-01-01

    The exotic features of quantum mechanics have the potential to revolutionize information technologies. Using superposition and entanglement, a quantum processor could efficiently tackle problems inaccessible to current-day computers. Nonlocal correlations may be exploited for intrinsically secure

  17. Management Needs for Computer Support.

    Science.gov (United States)

    Irby, Alice J.

    University management has many and varied needs for effective computer services in support of their processing and information functions. The challenge for the computer center managers is to better understand these needs and assist in the development of effective and timely solutions. Management needs can range from accounting and payroll to…

  18. Security and Privacy in Fog Computing: Challenges

    OpenAIRE

    Mukherjee, Mithun; Matam, Rakesh; Shu, Lei; Maglaras, Leandros; Ferrag, Mohamed Amine; Choudhry, Nikumani; Kumar, Vikas

    2017-01-01

    open access article Fog computing paradigm extends the storage, networking, and computing facilities of cloud computing toward the edge of the network while offloading the cloud data centers and reducing service latency to the end users. However, the characteristics of fog computing give rise to new security and privacy challenges. The existing security and privacy measures for cloud computing cannot be directly applied to fog computing due to its features, such as mobility, heteroge...

  19. Planning for the Automation of School Library Media Centers.

    Science.gov (United States)

    Caffarella, Edward P.

    1996-01-01

    Geared for school library media specialists whose centers are in the early stages of automation or conversion to a new system, this article focuses on major components of media center automation: circulation control; online public access catalogs; machine readable cataloging; retrospective conversion of print catalog cards; and computer networks…

  20. Computer-Aided Corrosion Program Management

    Science.gov (United States)

    MacDowell, Louis

    2010-01-01

    This viewgraph presentation reviews Computer-Aided Corrosion Program Management at John F. Kennedy Space Center. The contents include: 1) Corrosion at the Kennedy Space Center (KSC); 2) Requirements and Objectives; 3) Program Description, Background and History; 4) Approach and Implementation; 5) Challenges; 6) Lessons Learned; 7) Successes and Benefits; and 8) Summary and Conclusions.

  1. [Computer-aided prescribing: from utopia to reality].

    Science.gov (United States)

    Suárez-Varela Ubeda, J; Beltrán Calvo, C; Molina López, T; Navarro Marín, P

    2005-05-31

    To determine whether the introduction of computer-aided prescribing helped reduce the administrative burden at primary care centers. Descriptive, cross-sectional design. Torreblanca Health Center in the province of Seville, southern Spain. From 29 October 2003 to the present, a pilot project involving nine pharmacies in the basic health zone served by this health center has been running to evaluate computer-aided prescribing (the Receta XXI project) with real patients. All patients on the center's list who came to the center for an administrative consultation to renew prescriptions for medications or supplies for long-term treatment. Total number of administrative visits per patient for patients who came to the center to renew prescriptions for long-term treatment, as recorded by the Diraya system (Historia Clinica Digital del Ciudadano, or Citizen's Digital Medical Record) during the period from February to July 2004. Total number of the same type of administrative visits recorded by the previous system (TASS) during the period from February to July 2003. The mean number of administrative visits per month during the period from February to July 2003 was 160, compared to a mean of 64 visits during the period from February to July 2004, a 60% reduction in visits for prescription renewal. Introducing a system for computer-aided prescribing significantly reduced the number of administrative visits for prescription renewal for long-term treatment. This could considerably reduce the administrative burden in primary care if the system were used in all centers.

  2. National Energy Research Scientific Computing Center 2007 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  3. Information center as a technical institute unifying a user community

    International Nuclear Information System (INIS)

    Maskewitz, B.F.; McGill, B.; Hatmaker, N.A.

    1976-01-01

    The historical background to the information analysis center concept is presented first. The Radiation Shielding Information Center (RSIC) at ORNL is cited as an example of the information analysis center. RSIC objectives and scope are described, and RSIC's role in unification of the field of shielding is discussed. Some problems in handling information exchange with respect to computer codes are examined

  4. Enhanced Survey and Proposal to secure the data in Cloud Computing Environment

    OpenAIRE

    MR.S.SUBBIAH; DR.S.SELVA MUTHUKUMARAN; DR.T.RAMKUMAR

    2013-01-01

    Cloud computing has the power to eliminate the cost of setting up high-end computing infrastructure. It is a promising design that provides a very flexible architecture, accessible through the internet. In the cloud computing environment the data may reside at any of the data centers. Because of this, a data center may leak the data stored there, beyond the reach and control of the users. For this kind of misbehaving data center, the service providers should take care of the security and...

  5. Astigmatic single photon emission computed tomography imaging with a displaced center of rotation

    International Nuclear Information System (INIS)

    Wang, H.; Smith, M.F.; Stone, C.D.; Jaszczak, R.J.

    1998-01-01

    A filtered backprojection algorithm is developed for single photon emission computed tomography (SPECT) imaging with an astigmatic collimator having a displaced center of rotation. The astigmatic collimator has two perpendicular focal lines, one that is parallel to the axis of rotation of the gamma camera and one that is perpendicular to this axis. Using SPECT simulations of projection data from a hot rod phantom and point source arrays, it is found that a lack of incorporation of the mechanical shift in the reconstruction algorithm causes errors and artifacts in reconstructed SPECT images. The collimator and acquisition parameters in the astigmatic reconstruction formula, which include focal lengths, radius of rotation, and mechanical shifts, are often partly unknown and can be determined using the projections of a point source at various projection angles. The accurate determination of these parameters by a least squares fitting technique using projection data from numerically simulated SPECT acquisitions is studied. These studies show that the accuracy of parameter determination is improved as the distance between the point source and the axis of rotation of the gamma camera is increased. The focal length to the focal line perpendicular to the axis of rotation is determined more accurately than the focal length to the focal line parallel to this axis. copyright 1998 American Association of Physicists in Medicine
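
    The least squares determination of acquisition parameters from point-source projections can be illustrated with a deliberately simplified model: assume an ideal parallel-beam geometry in which a point source at (x0, y0) projects to u(θ) = x0·cosθ + y0·sinθ + s, where s is the mechanical shift. The paper's astigmatic fan-beam model with two focal lengths is more involved; this sketch, with invented numbers, only shows the fitting idea.

        import numpy as np

        rng = np.random.default_rng(0)
        theta = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
        x0, y0, shift = 45.0, -20.0, 3.5           # true values, mm (invented)
        u = (x0 * np.cos(theta) + y0 * np.sin(theta) + shift
             + rng.normal(0.0, 0.2, theta.size))   # noisy measured centroids

        # Linear least squares for (x0, y0, shift) from the centroid sinogram.
        A = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
        params, *_ = np.linalg.lstsq(A, u, rcond=None)
        print("estimated (x0, y0, shift):", np.round(params, 2))

    Consistent with the paper's observation, moving the source farther from the axis of rotation enlarges the cosine and sine terms relative to the noise and so improves the conditioning of the fit.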

  6. The BaBar experiment's distributed computing model

    International Nuclear Information System (INIS)

    Boutigny, D.

    2001-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multitier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased, and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT format and later in Objectivity format. GRID tools will be used for remote job submission

  7. The BaBar Experiment's Distributed Computing Model

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased, and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT[1] format and later in Objectivity[2] format. GRID tools will be used for remote job submission

  8. Comparative Analysis of Canal Centering Ability of Different Single File Systems Using Cone Beam Computed Tomography- An In-Vitro Study.

    Science.gov (United States)

    Agarwal, Rolly S; Agarwal, Jatin; Jain, Pradeep; Chandra, Anil

    2015-05-01

    The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristics influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement of full-sequence rotary file systems. The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20(o) to 35(o) were divided into three groups of 20 samples each: ProTaper PT (group I) - full-sequence rotary control group, OneShape OS (group II) - single file continuous rotation, WaveOne WO (group III) - single file reciprocal motion. Pre-instrumentation and post-instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3 mm, 6 mm and 9 mm from the apex. Scanned images were then accessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey's honestly significant difference test. It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p >0.05) at both 3 mm and 6 mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16) as compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18), while the differences between OS and WO were not statistically significant. It was concluded that there was minor difference between the tested groups. Single file systems demonstrated average canal transportation and centering ability comparable to full sequence

  9. Comparative Analysis of Canal Centering Ability of Different Single File Systems Using Cone Beam Computed Tomography- An In-Vitro Study

    Science.gov (United States)

    Agarwal, Jatin; Jain, Pradeep; Chandra, Anil

    2015-01-01

    Background The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristics influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement of full-sequence rotary file systems. Aim The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Materials and Methods Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20o to 35o were divided into three groups of 20 samples each: ProTaper PT (group I) - full-sequence rotary control group, OneShape OS (group II) - single file continuous rotation, WaveOne WO (group III) - single file reciprocal motion. Pre-instrumentation and post-instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3 mm, 6 mm and 9 mm from the apex. Scanned images were then accessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey’s honestly significant difference test. Results It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p >0.05) at both 3 mm and 6 mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16) as compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18), while the differences between OS and WO were not statistically significant Conclusion It was concluded that there was minor difference between the tested groups. Single file systems demonstrated average canal

  10. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Rinaldi, L; Ciocca, C; K, M; Annovi, A; Antonelli, M; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Barberis, S; Carminati, L; Campana, S; Di, A; Capone, V; Carlino, G; Doria, A; Esposito, R; Merola, L; De, A; Luminari, L

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to the evolution of the ATLAS Computing Model.

  11. Canal transportation and centering ability of protaper and self-adjusting file system in long oval canals: An ex-vivo cone-beam computed tomography analysis.

    Science.gov (United States)

    Shah, Dipali Yogesh; Wadekar, Swati Ishwara; Dadpe, Ashwini Manish; Jadhav, Ganesh Ranganath; Choudhary, Lalit Jayant; Kalra, Dheeraj Deepak

    2017-01-01

    The purpose of this study was to compare and evaluate the shaping ability of ProTaper (PT) and the Self-Adjusting File (SAF) system using cone-beam computed tomography (CBCT) to assess their performance in oval-shaped root canals. Sixty-two mandibular premolars with single oval canals were divided into two experimental groups (n = 31) according to the systems used: Group I - PT and Group II - SAF. Canals were evaluated before and after instrumentation using CBCT to assess centering ratio and canal transportation at three levels. Data were statistically analyzed using one-way analysis of variance, post hoc Tukey's test, and t-test. The SAF showed better centering ability and less canal transportation than the PT only in the buccolingual plane at the 6 and 9 mm levels. The shaping ability of the PT was best in the apical third in both planes. The SAF had statistically significantly better centering and less canal transportation in the buccolingual as compared to the mesiodistal plane at the middle and coronal levels. The SAF produced significantly less transportation and remained more centered than the PT at the middle and coronal levels in the buccolingual plane of oval canals. In the mesiodistal plane, the performance of both systems was parallel.
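
    The transportation and centering figures reported in records 8 through 12 are conventionally computed from matched pre- and post-instrumentation cross-sections using the formulas popularized by Gambill et al. The sketch below illustrates that computation; the measurement values are invented and the naming is ours, not the authors'.

        # x1, x2: shortest mesial dentin thickness before/after preparation
        # y1, y2: shortest distal dentin thickness before/after preparation
        def transportation(x1, x2, y1, y2):
            return abs((x1 - x2) - (y1 - y2))       # 0 = no transportation

        def centering_ratio(x1, x2, y1, y2):
            dx, dy = x1 - x2, y1 - y2
            hi, lo = max(dx, dy), min(dx, dy)
            return 1.0 if hi == 0 else lo / hi      # 1 = perfectly centered

        # Invented measurements (mm) at the 3 mm level:
        print(transportation(0.95, 0.71, 1.10, 0.98))    # -> 0.12
        print(centering_ratio(0.95, 0.71, 1.10, 0.98))   # -> 0.50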

  12. Comparison of canal transportation and centering ability of twisted files, Pathfile-ProTaper system, and stainless steel hand K-files by using computed tomography.

    Science.gov (United States)

    Gergi, Richard; Rjeily, Joe Abou; Sader, Joseph; Naaman, Alfred

    2010-05-01

    The purpose of this study was to compare canal transportation and centering ability of 2 rotary nickel-titanium (NiTi) systems (Twisted Files [TF] and Pathfile-ProTaper [PP]) with conventional stainless steel K-files. Ninety root canals with severe curvature and short radius were selected. Canals were divided randomly into 3 groups of 30 each. After preparation with TF, PP, and stainless steel files, the amount of transportation that occurred was assessed by using computed tomography. Three sections from apical, mid-root, and coronal levels of the canal were recorded. Amount of transportation and centering ability were assessed. The 3 groups were statistically compared with analysis of variance and Tukey honestly significant difference test. Less transportation and better centering ability occurred with TF rotary instruments (P < .0001). K-files showed the highest transportation followed by PP system. PP system showed significant transportation when compared with TF (P < .0001). The TF system was found to be the best for all variables measured in this study. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  13. Exploring the Relationships between Self-Efficacy and Preference for Teacher Authority among Computer Science Majors

    Science.gov (United States)

    Lin, Che-Li; Liang, Jyh-Chong; Su, Yi-Ching; Tsai, Chin-Chung

    2013-01-01

    Teacher-centered instruction has been widely adopted in college computer science classrooms and has some benefits in training computer science undergraduates. Meanwhile, student-centered contexts have been advocated to promote computer science education. How computer science learners respond to or prefer the two types of teacher authority,…

  14. Performance of Cloud Computing Centers with Multiple Priority Classes

    NARCIS (Netherlands)

    Ellens, W.; Zivkovic, Miroslav; Akkerboom, J.; Litjens, R.; van den Berg, Hans Leo

    In this paper we consider the general problem of resource provisioning within cloud computing. We analyze the problem of how to allocate resources to different clients such that the service level agreements (SLAs) for all of these clients are met. A model with multiple service request classes
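
    The record is truncated and the authors' multi-class model is not reproduced here. As a flavor of this kind of SLA-driven provisioning analysis, the sketch below uses the classical single-class Erlang-C formula for an M/M/c queue to find how many servers keep the probability of a request waiting below a target; all parameters are invented.

        from math import factorial

        def erlang_c(a, c):
            """P(wait) in an M/M/c queue with offered load a = lambda/mu < c."""
            top = (a ** c / factorial(c)) * (c / (c - a))
            bottom = sum(a ** k / factorial(k) for k in range(c)) + top
            return top / bottom

        # Sizing question: servers needed so P(wait) < 5% for 64 requests/s
        # with 0.5 s mean service time (offered load a = 32).
        a = 64 * 0.5
        c = int(a) + 1                  # smallest stable cluster
        while erlang_c(a, c) > 0.05:
            c += 1
        print(c, round(erlang_c(a, c), 3))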

  15. Whatever works: a systematic user-centered training protocol to optimize brain-computer interfacing individually.

    Directory of Open Access Journals (Sweden)

    Elisabeth V C Friedrich

    Full Text Available This study implemented a systematic user-centered training protocol for a 4-class brain-computer interface (BCI). The goal was to optimize the BCI individually in order to achieve high performance within few sessions for all users. Eight able-bodied volunteers, who were initially naïve to the use of a BCI, participated in 10 sessions over a period of about 5 weeks. In an initial screening session, users were asked to perform the following seven mental tasks while multi-channel EEG was recorded: mental rotation, word association, auditory imagery, mental subtraction, spatial navigation, motor imagery of the left hand and motor imagery of both feet. Out of these seven mental tasks, the best 4-class combination as well as the most reactive frequency band (between 8-30 Hz) was selected individually for online control. Classification was based on common spatial patterns and Fisher's linear discriminant analysis. The number and time of classifier updates varied individually. Selection speed was increased by reducing trial length. To minimize differences in brain activity between sessions with and without feedback, sham feedback was provided in the screening and calibration runs in which usually no real-time feedback is shown. Selected task combinations and frequency ranges differed between users. The tasks that were included in the 4-class combination most often were (1) motor imagery of the left hand, (2) one brain-teaser task (word association or mental subtraction), (3) the mental rotation task and (4) one more dynamic imagery task (auditory imagery, spatial navigation, imagery of the feet). Participants achieved mean performances over sessions of 44-84% and peak performances in single sessions of 58-93% in this user-centered 4-class BCI protocol. This protocol is highly adjustable to individual users and thus could increase the percentage of users who can gain and maintain BCI control. A high priority for future work is to examine this protocol with severely
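
    The classification chain named in the abstract - common spatial patterns (CSP) followed by Fisher's linear discriminant analysis - can be sketched compactly. The following is a generic two-class illustration on random arrays standing in for band-pass-filtered EEG trials, not the study's code; the study's 4-class setup would extend it, for example one-versus-rest.

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def csp_filters(X1, X2, n_pairs=2):
            """CSP filters for two classes; X*: (trials, channels, samples)."""
            def mean_cov(X):
                return np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
            C1, C2 = mean_cov(X1), mean_cov(X2)
            evals, evecs = eigh(C1, C1 + C2)        # generalized eigenproblem
            order = np.argsort(evals)
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]
            return evecs[:, picks].T                # (2*n_pairs, channels)

        def log_var_features(W, X):
            Y = np.einsum('fc,tcs->tfs', W, X)      # spatially filtered trials
            v = Y.var(axis=2)
            return np.log(v / v.sum(axis=1, keepdims=True))

        rng = np.random.default_rng(0)              # stand-ins for real EEG
        X1 = rng.standard_normal((40, 16, 250))
        X2 = rng.standard_normal((40, 16, 250))
        X2[:, :4, :] *= 2.0                         # inject a spatial contrast
        W = csp_filters(X1, X2)
        F = np.vstack([log_var_features(W, X1), log_var_features(W, X2)])
        y = np.r_[np.zeros(40), np.ones(40)]
        print(LinearDiscriminantAnalysis().fit(F, y).score(F, y))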

  16. Cloud Computing Security

    OpenAIRE

    Ngongang, Guy

    2011-01-01

    This project aimed to show that it is possible to use a network intrusion detection system in the cloud. Security in the cloud is a concern nowadays, and security professionals are still devising means to make cloud computing more secure. First of all, the installation of ESX 4.0, vCenter Server and vCenter Lab Manager on server hardware was successful in building the platform. This allowed the creation and deployment of many virtual servers. Those servers have operating systems and a...

  17. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
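
    A first-order version of the dimensioning exercise described here is a balance between the compute farm's aggregate I/O appetite and the bandwidth the storage system can deliver. The sketch below is a back-of-envelope illustration with invented figures, not the configuration of the paper's data center.

        # Can the disk servers feed the compute cluster? (figures invented)
        cores = 4000                    # concurrently running job slots
        mb_per_event = 1.5              # average event size read by a job
        events_per_core_sec = 2.0       # per-core processing rate
        demand_gbps = cores * mb_per_event * events_per_core_sec * 8 / 1000

        disk_servers = 60
        gbps_per_server = 2 * 10 * 0.7  # 2x10GbE NICs at ~70% usable
        supply_gbps = disk_servers * gbps_per_server

        print(f"demand {demand_gbps:.0f} Gb/s vs supply {supply_gbps:.0f} Gb/s")
        # -> demand 96 Gb/s vs supply 840 Gb/s: headroom for staging and bursts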

  18. SciDAC Visualization and Analytics Center for Enabling Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Joy, Kenneth I. [Univ. of California, Davis, CA (United States)

    2014-09-14

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies.

  19. The 20 Tera flop Erasmus Computing Grid (ECG).

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  20. The 20 Tera flop Erasmus Computing Grid (ECG)

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2009-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  1. Computed tomography-guided percutaneous gastrostomy: initial experience at a cancer center

    International Nuclear Information System (INIS)

    Tyng, Chiang Jeng; Santos, Erich Frank Vater; Guerra, Luiz Felipe Alves; Bitencourt, Almir Galvao Vieira; Barbosa, Paula Nicole Vieira Pinto; Chojniak, Rubens; Universidade Federal do Espirito Santo

    2017-01-01

    Gastrostomy is indicated for patients with conditions that do not allow adequate oral nutrition. To reduce the morbidity and costs associated with the procedure, there is a trend toward the use of percutaneous gastrostomy, guided by endoscopy, fluoroscopy, or, most recently, computed tomography. The purpose of this paper was to review the computed tomography-guided gastrostomy procedure, as well as the indications for its use and the potential complications. (author)

  2. Computed tomography-guided percutaneous gastrostomy: initial experience at a cancer center

    Energy Technology Data Exchange (ETDEWEB)

    Tyng, Chiang Jeng; Santos, Erich Frank Vater; Guerra, Luiz Felipe Alves; Bitencourt, Almir Galvao Vieira; Barbosa, Paula Nicole Vieira Pinto; Chojniak, Rubens [A. C. Camargo Cancer Center, Sao Paulo, SP (Brazil); Universidade Federal do Espirito Santo (HUCAM/UFES), Vitoria, ES (Brazil). Hospital Universitario Cassiano Antonio de Morais. Radiologia e Diagnostico por Imagem

    2017-03-15

    Gastrostomy is indicated for patients with conditions that do not allow adequate oral nutrition. To reduce the morbidity and costs associated with the procedure, there is a trend toward the use of percutaneous gastrostomy, guided by endoscopy, fluoroscopy, or, most recently, computed tomography. The purpose of this paper was to review the computed tomography-guided gastrostomy procedure, as well as the indications for its use and the potential complications. (author)

  3. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  4. [The QuIK-Registry of the German Society of Cardiologists in private practice: countrywide and benchmarking quality assurance in invasive cardiology].

    Science.gov (United States)

    Albrecht, A; Levenson, B; Göhring, S; Haerer, W; Reifart, N; Ringwald, G; Troger, B

    2009-10-01

    QuIK is the German acronym for QUality Assurance in Invasive Cardiology. It denotes an ongoing project of electronic data collection in cardiac catheterization laboratories all over Germany. Mainly members of the German Society of Cardiologists in Private Practice (BNK) participate in this computer-based project. Since 1996, data on diagnostic and interventional procedures have been collected and sent to a registry center, where a regular benchmarking analysis of the results is performed. Part of the project is a yearly auditing process, including an on-site visit to the cath lab, to guarantee the reliability of the information collected. Since 1996 about one million procedures have been documented. © Georg Thieme Verlag KG Stuttgart · New York.

  5. High speed switching for computer and communication networks

    NARCIS (Netherlands)

    Dorren, H.J.S.

    2014-01-01

    The role of data centers and computers is vital for the future of our data-centric society. Historically the performance of data centers has increased by a factor of 100-1000 every ten years, and as a result the capacity of the data-center communication network has to scale accordingly. This

  6. Computed tomography system

    International Nuclear Information System (INIS)

    Lambert, T.W.; Blake, J.E.

    1981-01-01

    This invention relates to computed tomography and is particularly concerned with determining the CT numbers of zones of interest in an image displayed on a cathode ray tube, which zones lie in the so-called level or center of the gray-scale window. (author)
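
    The gray-scale window the patent refers to is the standard window/level display transform: CT numbers inside a window of chosen width, centered on the level, are mapped linearly to gray values, and everything outside is clipped to black or white. The sketch below illustrates that transform generically; it is not the patented apparatus.

        import numpy as np

        def window_ct(hu, level, width):
            """Map CT numbers (HU) to 8-bit display gray levels."""
            lo, hi = level - width / 2.0, level + width / 2.0
            g = (np.clip(hu, lo, hi) - lo) / (hi - lo)
            return (g * 255).astype(np.uint8)

        # Soft-tissue window: level (center) 40 HU, width 400 HU.
        print(window_ct(np.array([-1000, 0, 40, 240, 1000]), level=40, width=400))
        # -> [  0 102 127 255 255]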

  7. 69: Computers in radiotherapy - The Philippine perspective

    International Nuclear Information System (INIS)

    Rodriguez, L.V.; Sy Ortin, T.T.

    1987-01-01

    Malignant neoplasm ranks third among the killer diseases in the Philippines today. For the past five years, around 26,000 cases per year have been reported. In 1986, 27% of the total number of cases reported received radiation therapy. Individual treatment plans were made for 17% of these patients. A survey was conducted among the twelve radiation treatment centers in the country. Six of these centers are hoping to have treatment planning computers in the future. Financial constraints inhibit the acquisition of computers for radiotherapy use. At present, the authors have designed simple programs for use at the Cancer Control Center. Further development of treatment planning software that would meet the present needs of the local condition is being explored. 2 refs.; 2 figs.; 2 tabs

  8. [The conflict between work and private life and its relationship with burnout - results of a physician survey in breast cancer centers in North Rhine-Westphalia].

    Science.gov (United States)

    Nitzsche, A; Driller, E; Kowalski, C; Ansmann, L; Pfaff, H

    2013-05-01

    This study investigates the conflict between work and private life (work-life conflict and life-work conflict) and its relationship with burnout among physicians in breast cancer centers in North Rhine-Westphalia (n=378). With regard to the construct burnout, we differentiated between the 3 subscales emotional exhaustion, depersonalisation and personal accomplishment of the Maslach burnout inventory. In a structural equation model it was seen that above all the work-life conflict is positively associated with emotional exhaustion whereas the life-work conflict has a stronger positive correlation with depersonalisation and a negative relationship with personal accomplishment. Altogether, the results emphasise the importance of a successful interaction between professional work and private life ("work-life balance") for the health of medical personnel. © Georg Thieme Verlag KG Stuttgart · New York.

  9. PRIMARY SCHOOL PRINCIPALS’ ATTITUDES TOWARDS COMPUTER TECHNOLOGY IN THE USE OF COMPUTER TECHNOLOGY IN SCHOOL ADMINISTRATION

    OpenAIRE

    GÜNBAYI, İlhan; CANTÜRK, Gökhan

    2011-01-01

    The aim of the study is to determine the usage of computer technology in school administration, primary school administrators' attitudes towards computer technology, and administrators' and teachers' computer literacy level. The study was designed as survey research. The population of the study consists of primary school principals and assistant principals in public primary schools in the center of Antalya. The data were collected from 161 (51%) administrator questionnaires in 68 of 129 public primary s...

  10. CILT2000: Ubiquitous Computing--Spanning the Digital Divide.

    Science.gov (United States)

    Tinker, Robert; Vahey, Philip

    2002-01-01

    Discusses the role of ubiquitous and handheld computers in education. Summarizes the contributions of the Center for Innovative Learning Technologies (CILT) and describes the ubiquitous computing sessions at the CILT2000 Conference. (Author/YDS)

  11. An accelerated line-by-line option for MODTRAN combining on-the-fly generation of line center absorption within 0.1 cm-1 bins and pre-computed line tails

    Science.gov (United States)

    Berk, Alexander; Conforti, Patrick; Hawes, Fred

    2015-05-01

    A Line-By-Line (LBL) option is being developed for MODTRAN6. The motivation for this development is two-fold. Firstly, when MODTRAN is validated against an independent LBL model, it is difficult to isolate the source of discrepancies. One must verify consistency between pressure, temperature and density profiles, between column density calculations, between continuum and particulate data, between spectral convolution methods, and more. Introducing a LBL option directly within MODTRAN will insure common elements for all calculations other than those used to compute molecular transmittances. The second motivation for the LBL upgrade is that it will enable users to compute high spectral resolution transmittances and radiances for the full range of current MODTRAN applications. In particular, introducing the LBL feature into MODTRAN will enable first-principle calculations of scattered radiances, an option that is often not readily available with LBL models. MODTRAN will compute LBL transmittances within one 0.1 cm-1 spectral bin at a time, marching through the full requested band pass. The LBL algorithm will use the highly accurate, pressure- and temperature-dependent MODTRAN Padé approximant fits of the contribution from line tails to define the absorption from all molecular transitions centered more than 0.05 cm-1 from each 0.1 cm-1 spectral bin. The beauty of this approach is that the on-the-fly computations for each 0.1 cm-1 bin will only require explicit LBL summing of transitions centered within a 0.2 cm-1 spectral region. That is, the contribution from the more distant lines will be pre-computed via the Padé approximants. The status of the LBL effort will be presented. This will include initial thermal and solar radiance calculations, validation calculations, and self-validations of the MODTRAN band model against its own LBL calculations.
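
    The bookkeeping described above can be caricatured in a few lines: within each 0.1 cm-1 bin, only transitions centered within 0.05 cm-1 of the bin edges are summed explicitly (simple Lorentzian shapes here for brevity), while all more distant lines enter through a pre-computed tail term standing in for the Padé approximant fits. Everything below is an invented illustration of the scheme, not MODTRAN6 code.

        import numpy as np

        def bin_absorption(nu, lines, tail_term):
            """Absorption coefficient on the grid of one 0.1 cm-1 bin.
            lines: (center_cm1, strength, halfwidth_cm1) tuples; tail_term
            stands in for the pre-computed line-tail contribution."""
            lo, hi = nu[0], nu[-1]
            k = np.full_like(nu, tail_term)
            for nu0, S, g in lines:
                if lo - 0.05 <= nu0 <= hi + 0.05:   # explicit 0.2 cm-1 region
                    k += S * g / np.pi / ((nu - nu0) ** 2 + g ** 2)
            return k

        bin_nu = np.linspace(1000.00, 1000.10, 51)  # one 0.1 cm-1 bin
        lines = [(1000.03, 2e-3, 0.05),             # summed line by line
                 (1000.40, 5e-3, 0.06)]             # reaches bin via tails only
        print(bin_absorption(bin_nu, lines, tail_term=1e-4)[:3])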

  12. Strategies in Interventional Radiology: Formation of an Interdisciplinary Center of Vascular Anomalies - Chances and Challenges for Effective and Efficient Patient Management.

    Science.gov (United States)

    Sadick, Maliha; Dally, Franz Josef; Schönberg, Stefan O; Stroszczynski, Christian; Wohlgemuth, Walter A

    2017-10-01

    Background  Radiology is an interdisciplinary field dedicated to the diagnosis and treatment of numerous diseases and is involved in the development of multimodal treatment concepts. Method  Interdisciplinary case management, a broad spectrum of diagnostic imaging facilities and dedicated endovascular radiological treatment options are valuable tools that allow radiology to set up an interdisciplinary center for vascular anomalies. Results  Image-based diagnosis combined with endovascular treatment options is an essential tool for the treatment of patients with highly complex vascular diseases. These vascular anomalies can affect numerous parts of the body, so that a multidisciplinary treatment approach is required for optimal patient care. Conclusion  This paper discusses the possibilities and challenges regarding effective and efficient patient management in connection with the formation of an interdisciplinary center for vascular anomalies, with strengthening of the clinical role of radiologists. Key points   · Vascular anomalies, which include vascular tumors and malformations, are complex to diagnose and treat. · There are far more patients with vascular anomalies requiring therapy than interdisciplinary centers for vascular anomalies - there is currently a shortage of dedicated interdisciplinary centers for vascular anomalies in Germany that can provide dedicated care for affected patients. · Radiology includes a broad spectrum of diagnostic and minimally invasive therapeutic tools which allow the formation of an interdisciplinary center for vascular anomalies for effective, efficient and comprehensive patient management. Citation Format · Sadick M, Dally FJ, Schönberg SO et al. Strategies in Interventional Radiology: Formation of an Interdisciplinary Center of Vascular Anomalies - Chances and Challenges for Effective and Efficient Patient Management. Fortschr Röntgenstr 2017; 189: 957 - 966. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Security in cloud computing

    OpenAIRE

    Moreno Martín, Oriol

    2016-01-01

    Security in Cloud Computing is becoming a challenge for next generation Data Centers. This project will focus on investigating new security strategies for Cloud Computing systems. Cloud Computing is a recent paradigm to deliver services over the Internet. Businesses grow drastically because of it. Researchers focus their work on it. The rapid access to flexible and low-cost IT resources in an on-demand fashion allows users to avoid planning ahead for provisioning, and enterprises to save money ...

  14. Computer Vision Syndrome and Associated Factors Among Medical ...

    African Journals Online (AJOL)

    among college students the effects of computer use on the eye and vision related problems. ... which included the basic demographic profile, hours of computer use per ..... Male was reported by Costa et al., among call center workers in Brazil.[17] Headache .... the use of computer had become universal in higher education.

  15. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).
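
    The heart of an on-demand cloud manager of this kind is a reconciliation loop that compares batch-queue demand with the virtual worker nodes currently running, then books or releases machines. The sketch below is a generic illustration of such a loop; the policy, thresholds, class names and methods are all invented stand-ins, not ROCED's actual interface.

        SLOTS_PER_VM = 8      # job slots offered by one virtual worker node
        MAX_VMS = 50          # cap on machines booked at the cloud site

        def desired_vms(idle_jobs):
            """Target VM count for the queued demand (invented policy)."""
            return min(MAX_VMS, -(-idle_jobs // SLOTS_PER_VM))  # ceil division

        def reconcile(batch, cloud):
            """One pass of the demand/supply loop."""
            delta = desired_vms(batch.idle_jobs()) - cloud.count_vms()
            if delta > 0:
                cloud.boot(delta)                    # scale out
            elif delta < 0:
                cloud.drain_and_terminate(-delta)    # scale in idle nodes

        class FakeBatch:                             # stub scheduler adapter
            def idle_jobs(self): return 37

        class FakeCloud:                             # stub cloud-API adapter
            vms = 0
            def count_vms(self): return self.vms
            def boot(self, n): self.vms += n
            def drain_and_terminate(self, n): self.vms -= n

        cloud = FakeCloud()
        reconcile(FakeBatch(), cloud)
        print(cloud.vms)                             # -> 5 VMs for 37 idle jobs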

  16. CY15 Livermore Computing Focus Areas

    Energy Technology Data Exchange (ETDEWEB)

    Connell, Tom M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cupps, Kim C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); D' Hooge, Trent E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fahey, Tim J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fox, Dave M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Futral, Scott W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gary, Mark R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Goldstone, Robin J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hamilton, Pam G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Heer, Todd M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Long, Jeff W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mark, Rich J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Morrone, Chris J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shoopman, Jerry D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Slavec, Joe A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, David W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Springmeyer, Becky R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Stearman, Marc D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Watson, Py C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-20

    The LC team undertook a survey of primary Center drivers for CY15. Identified key drivers included enhancing user experience and productivity, pre-exascale platform preparation, process improvement, data-centric computing paradigms and business expansion. The team organized critical supporting efforts into three cross-cutting focus areas: Improving Service Quality; Monitoring, Automation, Delegation and Center Efficiency; and Next Generation Compute and Data Environments. In each area the team detailed high-level challenges and identified discrete actions to address these issues during the calendar year. Identifying the Center's primary drivers, issues, and plans is intended to serve as a lens focusing LC personnel, resources, and priorities throughout the year.

  17. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer-assisted virtual environment - or CAVE(TM) - that allows scientists and engineers to literally walk into their data...

  18. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  19. Computational Nanotechnology Molecular Electronics, Materials and Machines

    Science.gov (United States)

    Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This presentation covers research being performed on computational nanotechnology, carbon nanotubes and fullerenes at the NASA Ames Research Center. Topics cover include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.

  20. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    Science.gov (United States)

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm, SD 7 mm. Seasonal height variation was found: low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle enables growth modelling also in historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
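
    The combination the authors describe can be illustrated in miniature: a reference panel of complete growth curves yields principal components, and an individual's incomplete, irregularly timed measurements determine his component scores by least squares, which coincides with maximum likelihood under independent Gaussian residuals. All data below are invented stand-ins, not the Carlsschule measurements.

        import numpy as np

        rng = np.random.default_rng(1)
        ages = np.arange(6, 24)                        # yearly grid, ages 6-23
        mean_curve = 900 + 55 * (ages - 6) - 0.8 * (ages - 6) ** 2  # mm, invented

        # Reference panel of complete curves with random variation.
        panel = mean_curve + 25 * rng.standard_normal((200, ages.size))
        mu = panel.mean(axis=0)
        _, _, Vt = np.linalg.svd(panel - mu, full_matrices=False)
        PC = Vt[:3]                                    # keep 3 components

        # One boy, measured only at a few irregular ages (indices into grid).
        obs_idx = np.array([0, 2, 3, 9, 15])           # ages 6, 8, 9, 15, 21
        obs = mu[obs_idx] + np.array([-40.0, -35.0, -30.0, -20.0, -5.0])

        # Least-squares scores on the observed entries = ML fit (iid Gaussian).
        scores, *_ = np.linalg.lstsq(PC[:, obs_idx].T, obs - mu[obs_idx], rcond=None)
        full_curve = mu + scores @ PC                  # modelled height, all ages
        print(np.round(full_curve[[0, 6, 12]]))        # ages 6, 12, 18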

  1. Computer-Aided Diagnosis of Breast Cancer: A Multi-Center Demonstrator

    National Research Council Canada - National Science Library

    Floyd, Carey

    2000-01-01

    .... The focus has been to gather data from multiple sites in order to verify whether the artificial neural network computer aid to the diagnosis of breast cancer can be translated between locations...

  2. Computer Science Research at Langley

    Science.gov (United States)

    Voigt, S. J. (Editor)

    1982-01-01

    A workshop was held at Langley Research Center, November 2-5, 1981, to highlight ongoing computer science research at Langley and to identify additional areas of research based upon the computer user requirements. A panel discussion was held in each of nine application areas, and these are summarized in the proceedings. Slides presented by the invited speakers are also included. A survey of scientific, business, data reduction, and microprocessor computer users helped identify areas of focus for the workshop. Several areas of computer science which are of most concern to the Langley computer users were identified during the workshop discussions. These include graphics, distributed processing, programmer support systems and tools, database management, and numerical methods.

  3. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  4. Opportunities for Combined Heat and Power in Data Centers

    Energy Technology Data Exchange (ETDEWEB)

    Darrow, Ken [ICF International; Hedman, Bruce [ICF International

    2009-03-01

    Data centers represent a rapidly growing and very energy intensive activity in commercial, educational, and government facilities. In the last five years the growth of this sector was, in electric power terms, equivalent to seven new coal-fired power plants. Data centers consume 1.5% of the total power in the U.S. Growth over the next five to ten years is expected to require a similar increase in power generation. This energy consumption is concentrated in buildings that are 10-40 times more energy intensive than a typical office building. The sheer size of the market, the concentrated energy consumption per facility, and the tendency of facilities to cluster in 'high-tech' centers all contribute to a potential power infrastructure crisis for the industry. Meeting the energy needs of data centers is a moving target. Computing power is advancing rapidly, which reduces the energy requirements for data centers. A lot of work is going into improving the computing power of servers and other processing equipment. However, this increase in computing power is increasing the power densities of this equipment. While fewer pieces of equipment may be needed to meet a given data processing load, the energy density of a facility designed to house this higher efficiency equipment will be as high as or higher than it is today. In other words, while the data center of the future may have the IT power of ten data centers of today, it is also going to have higher power requirements and higher power densities. This report analyzes the opportunities for CHP technologies to assist primary power in making the data center more cost-effective and energy efficient. Broader application of CHP will lower the demand for electricity from central stations and reduce the pressure on electric transmission and distribution infrastructure. This report is organized into the following sections: (1) Data Center Market Segmentation--the description of the overall size of the market, the size and

  5. Trip attraction rates of shopping centers in Northern New Castle County, Delaware.

    Science.gov (United States)

    2004-07-01

    This report presents the trip attraction rates of the shopping centers in Northern New Castle County in Delaware. The study aims to provide an alternative to the ITE Trip Generation Manual (1997) for computing the trip attraction of shopping centers ...

  6. Exploitation of heterogeneous resources for ATLAS Computing

    CERN Document Server

    Chudoba, Jiri; The ATLAS collaboration

    2018-01-01

    LHC experiments require significant computational resources for Monte Carlo simulations and real data processing, and the ATLAS experiment is no exception. In 2017, ATLAS steadily exploited almost 3M HS06 units, which corresponds to about 300 000 standard CPU cores. The total disk and tape capacity managed by the Rucio data management system exceeded 350 PB. Resources are provided mostly by Grid computing centers distributed in geographically separated locations and connected by the Grid middleware. The ATLAS collaboration developed several systems to manage computational jobs, data files and network transfers. ATLAS solutions for job and data management (PanDA and Rucio) were generalized and are now used also by other collaborations. More components are needed to include new resources such as private and public clouds, volunteers' desktop computers and primarily supercomputers in major HPC centers. Workflows and data flows significantly differ for these less traditional resources and extensive software re...

  7. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  8. National Nuclear Data Center status report

    International Nuclear Information System (INIS)

    2002-01-01

    This paper is the status report of the US National Nuclear Data Center, Brookhaven. It describes the new NDS approach to customer services, which is based on users initiating wish lists on topics of interest, with the possibility of receiving reports in hardcopy or electronic form. After completion within the next two years of the multi-platform software for management and data retrievals from shared databases, users will have the opportunity to install their own local nuclear data center for desktop applications directly. The paper describes the computer facilities, the nuclear reaction data structure, the database migration and the customer services. (a.n.)

  9. Scientific activities 1980 Nuclear Research Center ''Democritos''

    International Nuclear Information System (INIS)

    1982-01-01

    The scientific activities and achievements of the Nuclear Research Center Democritos for the year 1980 are presented in the form of a list of 76 projects giving the title, objectives, and person responsible for each project, the activities carried out, and the pertaining lists of publications. The 16 chapters of this work cover the activities of the main Divisions of the Democritos NRC: Electronics, Biology, Physics, Chemistry, Health Physics, Reactor, Scientific Directorate, Radioisotopes, Environmental Radioactivity, Soil Science, Computer Center, Uranium Exploration, Medical Service, Technological Applications, Radioimmunoassay and Training. (N.C.)

  10. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  11. Computed Tomographic Analysis of Ventral Atlantoaxial Optimal Safe Implantation Corridors in 27 Dogs.

    Science.gov (United States)

    Leblond, Guillaume; Gaitero, Luis; Moens, Noel M M; Zur Linden, Alex; James, Fiona M K; Monteith, Gabrielle J; Runciman, John

    2017-11-01

    Objectives  Ventral atlantoaxial stabilization techniques are challenging surgical procedures in dogs. Available surgical guidelines are based upon subjective anatomical landmarks, and limited radiographic and computed tomographic data. The aims of this study were (1) to provide detailed anatomical descriptions of atlantoaxial optimal safe implantation corridors to generate objective recommendations for optimal implant placements and (2) to compare anatomical data obtained in non-affected Toy breed dogs, affected Toy breed dogs suffering from atlantoaxial instability and non-affected Beagle dogs. Methods  Anatomical data were collected from a prospectively recruited population of 27 dogs using a previously validated method of optimal safe implantation corridor analysis using computed tomographic images. Results  Optimal implant positions and three-dimensional numerical data were generated successfully in all cases. Anatomical landmarks could be used to generate objective definitions of optimal insertion points which were applicable across all three groups. Overall the geometrical distribution of all implant sites was similar in all three groups with a few exceptions. Clinical Significance  This study provides extensive anatomical data available to facilitate surgical planning of implant placement for atlantoaxial stabilization. Our data suggest that non-affected Toy breed dogs and non-affected Beagle dogs constitute reasonable research models to study atlantoaxial stabilization constructs. Schattauer GmbH Stuttgart.

  12. SciDAC visualization and analytics center for enabling technology

    International Nuclear Information System (INIS)

    Bethel, E Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci, Valerio; Childs, Hank; Cohen, Jonathan; Duchaineau, Mark; Hamann, Bernd; Hansen, Charles; Laney, Dan; Lindstrom, Peter; Meredith, Jeremy; Ostrouchov, George; Parker, Steven; Silva, Claudio; Sanderson, Allen; Tricoche, Xavier

    2007-01-01

    The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an 'information big bang,' which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision of VACET is to adapt, extend, create when necessary, and deploy visual data analysis solutions that are responsive to the needs of DOE's computational and experimental scientists. Our center is engineered to be directly responsive to those needs and to deliver solutions for use in DOE's large open computing facilities. The research and development directly target data understanding problems provided by our scientific application stakeholders. VACET draws from a diverse set of visualization technology ranging from production quality applications and application frameworks to state-of-the-art algorithms for visualization, analysis, analytics, data manipulation, and data management.

  13. Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  14. Center for Computer Security newsletter. Volume 2, Number 3

    Energy Technology Data Exchange (ETDEWEB)

    None

    1983-05-01

    The Fifth Computer Security Group Conference was held November 16 to 18, 1982, at the Knoxville Hilton in Knoxville, Tennessee. Attending were 183 people, representing the Department of Energy, DOE contractors, other government agencies, and vendor organizations. In these papers are abridgements of most of the papers presented in Knoxville. Less than half-a-dozen speakers failed to furnish either abstracts or full-text papers of their Knoxville presentations.

  15. Survey of Storage and Fault Tolerance Strategies Used in Cloud Computing

    Science.gov (United States)

    Ericson, Kathleen; Pallickara, Shrideep

    Cloud computing has gained significant traction in recent years. Companies such as Google, Amazon and Microsoft have been building massive data centers over the past few years. Spanning geographic and administrative domains, these data centers tend to be built out of commodity desktops with the total number of computers managed by these companies being in the order of millions. Additionally, the use of virtualization allows a physical node to be presented as a set of virtual nodes resulting in a seemingly inexhaustible set of computational resources. By leveraging economies of scale, these data centers can provision CPU, networking, and storage at substantially reduced prices which in turn underpins the move by many institutions to host their services in the cloud.

  16. Quantum computing with defects.

    Science.gov (United States)

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

    Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness--its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.

  17. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Science.gov (United States)

    2013-12-05

    ... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS... Privacy Act of 1974 (5 U.S.C. 552a), as amended, this notice announces the renewal of a CMP that CMS plans...

  18. High Performance Computing and Storage Requirements for Biological and Environmental Research Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2013-05-01

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In addition to large-scale computing and storage resources NERSC provides support and expertise that help scientists make efficient use of its systems. The latest review revealed several key requirements, in addition to achieving its goal of characterizing BER computing and storage needs.

  19. Analysis on the security of cloud computing

    Science.gov (United States)

    He, Zhonglin; He, Yuhua

    2011-02-01

    Cloud computing is a new technology, which is the fusion of computer technology and Internet development. It will lead the revolution in IT and the information field. However, in cloud computing data and application software are stored at large data centers, and the management of data and services is not completely trustworthy, resulting in security problems that are a major obstacle to improving the quality of cloud services. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs the security architecture of cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the aspect of cloud computing users and service providers.

  20. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...

  1. 78 FR 30318 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2013-05-22

    ... Computational Mass-Spectrometry. Date: June 19-21, 2013. Time: 7:00 p.m. to 1:00 p.m. Agenda: To review and... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review... personal privacy. Name of Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict...

  2. Efficient workload management in geographically distributed data centers leveraging autoregressive models

    Science.gov (United States)

    Altomare, Albino; Cesario, Eugenio; Mastroianni, Carlo

    2016-10-01

    The opportunity of using Cloud resources on a pay-as-you-go basis and the availability of powerful data centers and high bandwidth connections are speeding up the success and popularity of Cloud systems, which is making on-demand computing a common practice for enterprises and scientific communities. The reasons for this success include natural business distribution, the need for high availability and disaster tolerance, the sheer size of their computational infrastructure, and/or the desire to provide uniform access times to the infrastructure from widely distributed client sites. Nevertheless, the expansion of large data centers is resulting in a huge rise in the electrical power consumed by hardware facilities and cooling systems. The geographical distribution of data centers is becoming an opportunity: the variability of electricity prices, environmental conditions and client requests, both from site to site and with time, makes it possible to intelligently and dynamically (re)distribute the computational workload and achieve business goals as diverse as: the reduction of costs, energy consumption and carbon emissions, the satisfaction of performance constraints, the adherence to Service Level Agreements established with users, etc. This paper proposes an approach that helps to achieve the business goals established by the data center administrators. The workload distribution is driven by a fitness function, evaluated for each data center, which weighs some key parameters related to business objectives, among which, the price of electricity, the carbon emission rate, the balance of load among the data centers, etc. For example, the energy costs can be reduced by using a "follow the moon" approach, e.g. by migrating the workload to data centers where the price of electricity is lower at that time. Our approach uses data about historical usage of the data centers and data about environmental conditions to predict, with the help of regressive models, the values of the
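
    A small sketch can make the fitness-driven placement concrete. The C fragment below scores each site with a linear weighted sum of electricity price, carbon emission rate and current load, and routes new work to the lowest score; the linear form, the weights and all figures are invented for illustration and are not the authors' actual model.

        /* Illustrative sketch of fitness-driven workload placement across
         * geographically distributed data centers.  The weighted-sum form
         * and every number below are assumptions for the example. */
        #include <stdio.h>

        struct center {
            const char *name;
            double electricity_price; /* $/kWh, lower is better     */
            double carbon_rate;       /* kgCO2/kWh, lower is better */
            double load;              /* utilization in [0,1]       */
        };

        /* Hypothetical fitness: lower score = better placement target. */
        static double fitness(const struct center *c)
        {
            const double w_price = 1.0, w_carbon = 0.5, w_load = 0.8;
            return w_price * c->electricity_price
                 + w_carbon * c->carbon_rate
                 + w_load * c->load;
        }

        int main(void)
        {
            struct center sites[] = {
                { "site-A", 0.12, 0.45, 0.80 },
                { "site-B", 0.08, 0.70, 0.40 }, /* cheap power right now */
                { "site-C", 0.15, 0.20, 0.55 }, /* green but expensive   */
            };
            int best = 0;
            for (int i = 0; i < 3; i++) {
                printf("%s: fitness %.3f\n", sites[i].name, fitness(&sites[i]));
                if (fitness(&sites[i]) < fitness(&sites[best])) best = i;
            }
            /* The lowest combined score wins the next batch of workload. */
            printf("route new workload to %s\n", sites[best].name);
            return 0;
        }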

  3. Handbook for the Computer Security Certification of Trusted Systems

    National Research Council Canada - National Science Library

    Weissman, Clark

    1995-01-01

    Penetration testing is required for National Computer Security Center (NCSC) security evaluations of systems and products for the B2, B3, and A1 class ratings of the Trusted Computer System Evaluation Criteria (TCSEC...

  4. Energy efficient data centers

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman

    2004-03-30

    through extensive participation with data center professionals, examination of case study findings, and participation in data center industry meetings and workshops. Industry partners enthusiastically provided valuable insight into current practice, and helped to identify areas where additional public interest research could lead to significant efficiency improvement. This helped to define and prioritize the research agenda. The interaction involved industry representatives with expertise in all aspects of data center facilities, including specialized facility infrastructure systems and computing equipment. In addition to the input obtained through industry workshops, LBNL's participation in a three-day, comprehensive design "charrette" hosted by the Rocky Mountain Institute (RMI) yielded a number of innovative ideas for future research.

  5. Development of an instrument to measure health center (HC) personnel's computer use, knowledge and functionality demand for HC computerized information system in Thailand.

    Science.gov (United States)

    Kijsanayotin, Boonchai; Pannarunothai, Supasit; Speedie, Stuart

    2005-01-01

    Knowledge about socio-technical aspects of information technology (IT) is vital for the success of health IT projects. Thailand's health administration anticipates using health IT to support the recently implemented national universal health care system. However, the national knowledge associated with the socio-technical aspects of health IT has not been studied in Thailand. A survey instrument measuring Thai health center (HC) personnel's computer use, basic IT knowledge and HC computerized information system functionality needs was developed. The instrument reveals acceptable test-retest reliability and reasonable internal consistency of the measures. The future nation-wide demonstration study will benefit from this study.

  6. SCELib3.0: The new revision of SCELib, the parallel computational library of molecular properties in the Single Center Approach

    Science.gov (United States)

    Sanna, N.; Baccarelli, I.; Morelli, G.

    2009-12-01

    SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions aimed at significantly enhancing its capability to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method, in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated, the code has been ported to platforms based on accelerating coprocessors such as NVIDIA GPGPUs, and the new parallel model adopted is able to run efficiently on a mixed many-core computing system. Program summary Program title: SCELib3.0 Catalogue identifier: ADMG_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 018 862 No. of bytes in distributed program, including test data, etc.: 4 955 014 Distribution format: tar.gz Programming language: C Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes, 1 to 32 (CPU or GPU) used. RAM: Up to 32 GB depending on the molecular
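
    The single-center idea itself is easy to illustrate. In the C sketch below, a toy axially symmetric function f(r, theta) is projected onto Legendre polynomials P_l(cos theta) to obtain radial expansion coefficients f_l(r) on one shell. The test function, grid and truncation are invented for the example; the real library expands full molecular densities in symmetry-adapted harmonics with far more machinery. Link with -lm.

        /* Project a toy axially symmetric function f(r, theta) onto
         * Legendre polynomials P_l(cos theta): the essence of a single
         * center expansion, shown on one radial shell.  Everything here
         * is an invented illustration, not SCELib code. */
        #include <stdio.h>
        #include <math.h>

        /* P_l(x) by the standard three-term recurrence. */
        static double legendre(int l, double x)
        {
            double p0 = 1.0, p1 = x;
            if (l == 0) return p0;
            for (int k = 2; k <= l; k++) {
                double pk = ((2.0 * k - 1.0) * x * p1 - (k - 1.0) * p0) / k;
                p0 = p1;
                p1 = pk;
            }
            return p1;
        }

        int main(void)
        {
            const int N = 2000;       /* midpoint rule on x = cos(theta) */
            const double r = 1.5;     /* one radial shell */
            for (int l = 0; l <= 4; l++) {
                double sum = 0.0, dx = 2.0 / N;
                for (int i = 0; i < N; i++) {
                    double x = -1.0 + (i + 0.5) * dx;
                    double f = exp(-r) * (1.0 + 0.3 * x);  /* toy density */
                    sum += f * legendre(l, x) * dx;
                }
                /* f_l(r) = (2l+1)/2 * integral of f(r,x) P_l(x) on [-1,1];
                 * only l=0 and l=1 survive for this toy function. */
                printf("l=%d  f_l(r=%.1f) = % .6f\n",
                       l, r, (2.0 * l + 1.0) / 2.0 * sum);
            }
            return 0;
        }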

  7. Noise-Resilient Quantum Computing with a Nitrogen-Vacancy Center and Nuclear Spins.

    Science.gov (United States)

    Casanova, J; Wang, Z-Y; Plenio, M B

    2016-09-23

    Selective control of qubits in a quantum register for the purposes of quantum information processing represents a critical challenge for dense spin ensembles in solid-state systems. Here we present a protocol that achieves a complete set of selective electron-nuclear gates and single nuclear rotations in such an ensemble in diamond facilitated by a nearby nitrogen-vacancy (NV) center. The protocol suppresses internuclear interactions as well as unwanted coupling between the NV center and other spins of the ensemble to achieve quantum gate fidelities well exceeding 99%. Notably, our method can be applied to weakly coupled, distant spins representing a scalable procedure that exploits the exceptional properties of nuclear spins in diamond as robust quantum memories.

  8. Efficient coherent driving of NV centers in a YIG-nanodiamond hybrid platform

    Science.gov (United States)

    Andrich, Paolo; de Las Casas, Charles F.; Liu, Xiaoying; Bretscher, Hope L.; Nealey, Paul F.; Awschalom, David D.; Heremans, F. Joseph

    The nitrogen-vacancy (NV) center in diamond is an ideal candidate for room temperature quantum computing and sensing applications. These schemes rely on magnetic dipolar interactions between the NV centers and other paramagnetic centers, imposing a stringent limit on the spin-to-spin separation. For instance, creating multi-qubit entanglement requires two NV centers to be within a few nanometers of each other, limiting the possibility for individual optical and microwave (MW) control. Moreover, to sense spins external to the diamond lattice the NV centers need to be within a few nanometers of the surface, where their coherence properties are strongly reduced. In this work, we address these limitations using a hybrid YIG-nanodiamond platform where propagating spin-waves (SWs) are used to mediate the interaction between a MW source and a NV center ensemble, thereby relaxing the requirements imposed by dipolar interactions. In particular, we show that SWs can be used to amplify a MW signal detected by the NV centers by more than two orders of magnitude, allowing us to obtain ultra-low energy SW-driven coherent control of the NV centers. These results demonstrate the potential of YIG-ND hybrid systems for the realization of enhanced quantum sensing and scalable computing devices. This work is supported by the ARO MURI program and the AFOSR.

  9. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman

    2012-02-01

    OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolved in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.
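
    As a minimal reminder of the node-level parallelism at stake, the sketch below is a plain OpenMP dot product of the kind that would sit inside each MPI rank of the hybrid MPI+OpenMP codes mentioned above. It is a generic textbook example, not code from the project; compile with an OpenMP flag, e.g. -fopenmp on GCC-like compilers.

        /* Generic OpenMP dot product: the kind of node-level loop that
         * runs inside each MPI rank of a hybrid MPI+OpenMP code. */
        #include <stdio.h>
        #include <omp.h>

        int main(void)
        {
            enum { N = 1 << 20 };
            static double a[N], b[N];   /* static: keeps them off the stack */
            double dot = 0.0;

            #pragma omp parallel
            {
                #pragma omp for
                for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

                /* The implicit barrier above guarantees a and b are filled
                 * before any thread starts the reduction loop. */
                #pragma omp for reduction(+:dot)
                for (int i = 0; i < N; i++) dot += a[i] * b[i];
            }
            printf("dot = %.1f using up to %d threads\n",
                   dot, omp_get_max_threads());
            return 0;
        }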

  10. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on problem type are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are further explained. Finally the aims and characteristics of a European-sponsored project, called PAGEIN, are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  11. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand, the cost and performance of the computer, etc. are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented by only a central processing unit (CPU) and all the computing demand is to be processed on the present computer until the next replacement. In model 2, on the other hand, excess demand is admitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O) and it must process all the demand. Model 4 is the same as model 3, but excess demand is admitted and may be processed in another center. (1) Computing demand at the JAERI, (2) conformity of Grosch's law for recent computers, (3) replacement cost of computer systems, etc. are also described. (author)
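
    The flavor of such a model can be sketched numerically. The C toy below is closest in spirit to model 2 (excess demand transferred elsewhere at a price): it charges one purchase per replacement cycle plus outsourcing costs for demand beyond capacity, and picks the interval with the lowest average yearly cost. All cost figures and the growth rate are invented, and effects such as faster successor machines are ignored.

        /* Toy replacement-interval search in the spirit of "model 2":
         * excess demand may be processed elsewhere at a price.  All
         * numbers are invented placeholders for illustration. */
        #include <stdio.h>

        #define MAX_YEARS 12

        int main(void)
        {
            const double purchase  = 1000.0; /* cost of a new system          */
            const double capacity  = 100.0;  /* work units/year when new      */
            const double demand0   = 60.0;   /* demand in year 0              */
            const double growth    = 1.25;   /* demand grows 25% per year     */
            const double out_price = 3.0;    /* cost per outsourced work unit */

            double best_avg = 1e30;
            int best_T = 1;
            for (int T = 1; T <= MAX_YEARS; T++) {
                double total = purchase, demand = demand0;
                for (int y = 0; y < T; y++) {
                    double excess = demand - capacity;
                    if (excess > 0) total += excess * out_price; /* overflow out */
                    demand *= growth;
                }
                double avg = total / T; /* average yearly cost for interval T */
                printf("T=%2d years: avg cost %.1f\n", T, avg);
                if (avg < best_avg) { best_avg = avg; best_T = T; }
            }
            printf("optimal replacement interval: %d years\n", best_T);
            return 0;
        }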

  12. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  13. Research Centers & Consortia | College of Engineering & Applied Science

    Science.gov (United States)


  14. Rapid guiding center calculations

    International Nuclear Information System (INIS)

    White, R.B.

    1995-04-01

    Premature loss of high energy particles, and in particular fusion alpha particles, is very deleterious in a fusion reactor. Because of this it is necessary to make long-time simulations, on the order of the alpha particle slowing down time, with a number of test particles sufficient to give predictions with reasonable statistical accuracy. Furthermore it is desirable to do this for a large number of equilibria with different characteristic magnetic field ripple, to best optimize engineering designs. In addition, modification of the particle distribution due to magnetohydrodynamic (MHD) modes such as the sawtooth mode present in the plasma can be important, and this effect requires additional simulation. Thus, given the large number of necessary simulations, any increase in computing speed in guiding center codes is an important improvement in predictive capability. Previous guiding center codes using numerical equilibria such as ORBIT evaluated the local field strength and ripple magnitude using Lagrangian interpolation on a grid. Evaluation of these quantities four times per time step (using a fourth order Runge-Kutta routine) constitutes the major computational effort of the code. In the present work the authors represent the field quantities through an expansion in terms of pseudo-cartesian coordinates formed from the magnetic coordinates. The simplicity of the representation gives four important advantages over previous methods
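
    The stepping scheme referred to above is classical fourth-order Runge-Kutta, which evaluates the right-hand side four times per step; that is why the cost of each field evaluation dominates and a cheaper field representation pays off. The C sketch below applies RK4 to a mock circular drift orbit standing in for the guiding-center equations; it is a generic illustration, not ORBIT's actual equations of motion.

        /* Fourth-order Runge-Kutta stepping on a mock drift orbit.  In an
         * ORBIT-like code, rhs() would evaluate B and its ripple, which
         * is the expensive part done four times per step. */
        #include <stdio.h>

        typedef struct { double x, y; } state;

        /* Stand-in for the guiding-center equations of motion:
         * uniform rotation, so the exact orbit is a circle. */
        static state rhs(state s)
        {
            state d = { -s.y, s.x };
            return d;
        }

        static state rk4_step(state s, double h)
        {
            state k1 = rhs(s);
            state k2 = rhs((state){ s.x + 0.5*h*k1.x, s.y + 0.5*h*k1.y });
            state k3 = rhs((state){ s.x + 0.5*h*k2.x, s.y + 0.5*h*k2.y });
            state k4 = rhs((state){ s.x + h*k3.x, s.y + h*k3.y });
            s.x += h/6.0 * (k1.x + 2*k2.x + 2*k3.x + k4.x);
            s.y += h/6.0 * (k1.y + 2*k2.y + 2*k3.y + k4.y);
            return s;
        }

        int main(void)
        {
            state s = { 1.0, 0.0 };
            double h = 0.01;
            for (int i = 0; i < 628; i++) s = rk4_step(s, h); /* ~2*pi */
            printf("after one orbit: x=%.6f y=%.6f (expect ~1, ~0)\n", s.x, s.y);
            return 0;
        }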

  15. [Incentives of German Rehabilitation Centers to Implement Screening Strategies for the Prevention of Methicillin-resistant Staphylococcus aureus (MRSA) Transmissions and Infections].

    Science.gov (United States)

    Claus, F; Ried, W

    2015-06-01

    The colonization with Methicillin-resistant Staphylococcus aureus (MRSA) imposes a risk on the patient herself as well as on other patients and on healthcare professionals because, in the case of an infection, substantial health problems will arise. Moreover, additional costs for health care will occur as well. This paper examines the incentives of German rehabilitation centers to implement prevention measures in order to avert MRSA transmissions and infections. Relying on a decision tree analysis, the expected healthcare cost per capita is calculated for three strategies: general screening upon admission, risk-based screening upon admission, and no screening at all. The values of the relevant parameters are identified by a review of the published literature. From the perspective of a rehabilitation center, undertaking no screening at all minimizes the expected cost of treatment, while the first strategy causes the highest cost. This ordering is robust with respect to multivariate sensitivity analyses. In Germany, rehabilitation centers currently are not reimbursed for the implementation of additional prevention measures against MRSA. Hence, as our analysis demonstrates, the financial incentive to implement MRSA screening turns out to be rather weak. This could well be inefficient for society because a substantial part of the benefit accruing to other agents is not taken into account. Our results can be used to indicate changes in the remuneration system that would provide rehabilitation centers with an appropriate incentive for MRSA prevention. Moreover, recently enacted hygiene regulations such as the change in the Infection Prevention Act or the medical hygiene regulations emphasize the significance of an appropriate hygiene regimen, thus fostering MRSA prevention. © Georg Thieme Verlag KG Stuttgart · New York.
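
    A decision-tree comparison of this kind reduces to expected-value arithmetic over branch probabilities and costs. The C toy below compares the three strategies from the center's perspective; all probabilities and cost figures are invented placeholders, chosen only to reproduce the qualitative ordering reported in the abstract (no screening cheapest, general screening most expensive), whereas the paper derives its parameter values from the published literature.

        /* Toy expected-cost comparison of the three screening strategies.
         * Every number is an invented placeholder, not the paper's data.
         * Benefits accruing to other agents are ignored, mirroring the
         * rehabilitation center's own perspective. */
        #include <stdio.h>

        int main(void)
        {
            const double p_carrier     = 0.05;   /* admission prevalence       */
            const double p_infect      = 0.10;   /* infection risk, undetected */
            const double c_test        = 30.0;   /* screening test per patient */
            const double c_decolonize  = 200.0;  /* decolonization if detected */
            const double c_infection   = 4000.0; /* treating an infection      */
            const double risk_fraction = 0.30;   /* patients flagged at risk   */

            /* No screening: carriers go undetected and may get infected. */
            double no_screen = p_carrier * p_infect * c_infection;

            /* General screening: everyone tested, all carriers decolonized. */
            double general = c_test + p_carrier * c_decolonize;

            /* Risk-based: only the at-risk fraction tested; assume (for the
             * toy) 80% of carriers are caught there, the rest go undetected. */
            double risk_based = risk_fraction * c_test
                              + 0.8 * p_carrier * c_decolonize
                              + 0.2 * p_carrier * p_infect * c_infection;

            printf("expected cost/patient: none %.2f, risk-based %.2f, "
                   "general %.2f\n", no_screen, risk_based, general);
            return 0;
        }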

  16. Computational Toxicology as Implemented by the US EPA ...

    Science.gov (United States)

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T

  17. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Science.gov (United States)

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
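
    The client-driven pattern described above can be pictured as a simple polling loop. In the sketch below, fetch_job and report_result are hypothetical stand-ins for the server exchange (JobCenter's actual protocol is not reproduced here); because the worker initiates every exchange, it can run behind a firewall and the pool balances itself.

        /* Sketch of a client-driven worker loop.  fetch_job() and
         * report_result() are invented mocks, not JobCenter's real API. */
        #include <stdio.h>
        #include <unistd.h>   /* sleep(), POSIX */

        /* Mock server call: return a job id, or -1 when the queue is empty. */
        static int fetch_job(void)
        {
            static int remaining = 3;          /* pretend three jobs queued */
            return remaining-- > 0 ? remaining : -1;
        }

        static void report_result(int job, int status)
        {
            printf("job %d finished with status %d\n", job, status);
        }

        int main(void)
        {
            for (;;) {
                int job = fetch_job();  /* the worker initiates every exchange */
                if (job < 0) {
                    sleep(1);           /* nothing to do: back off, re-poll */
                    break;              /* (a real worker would loop forever) */
                }
                /* ... run the job's command or multistep workflow here ... */
                report_result(job, 0);
            }
            return 0;
        }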

  18. Dose Reduction and Dose Management in Computed Tomography - State of the Art.

    Science.gov (United States)

    Zinsser, Dominik; Marcus, Roy; Othman, Ahmed E; Bamberg, Fabian; Nikolaou, Konstantin; Flohr, Thomas; Notohamiprodjo, Mike

    2018-03-13

    For years, the number of performed CT examinations has been rising. At the same time, computed tomography became more dose efficient. The aim of this article is to give an overview of the state of the art in dose reduction in CT and to highlight currently available tools in dose management. By performing a literature search on PubMed regarding dose reduction in CT, relevant articles were identified and analyzed. Technical innovations with individual adaptation of tube current and voltage as well as iterative image reconstruction enable a considerable dose reduction with preserved image quality. At the same time, dedicated software tools are able to handle huge amounts of data and allow optimization of existing examination protocols. Key points: CT examinations are increasingly performed and contribute considerably to non-natural radiation exposure; a correct indication is crucial for each CT examination; the examination protocol has to be tailored to the medical question and patient; multiple technical innovations enable considerable dose reduction with constant image quality; dose management with dedicated software tools gains importance. Citation: Zinsser D, Marcus R, Othman AE et al. Dose reduction and dose management in computed tomography - State of the art. Fortschr Röntgenstr 2018; DOI: 10.1055/s-0044-101261. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Exploiting the Potential of Data Centers in the Smart Grid

    Science.gov (United States)

    Wang, Xiaoying; Zhang, Yu-An; Liu, Xiaojing; Cao, Tengfei

    As the number of cloud computing data centers has grown rapidly in recent years, from the perspective of the smart grid they represent a really large and noticeable electric load. In this paper, we focus on the important role and the potential of data centers as controllable loads in the smart grid. We review relevant research in the area of letting data centers participate in the ancillary services market and demand response programs of the grid, and further investigate the possibility of exploiting the impact of data center placement on the grid. Various opportunities and challenges are summarized, which could provide more opportunities for researchers to explore this field.

  20. Inleiding: 'History of computing'. Geschiedschrijving over computers en computergebruik in Nederland

    Directory of Open Access Journals (Sweden)

    Adrienne van den Boogaard

    2008-06-01

    Along with the international trends in the history of computing, Dutch contributions over the past twenty years moved away from a focus on machinery to the broader scope of the use of computers, the appropriation of computing technologies in various traditions, labour relations and professionalisation issues, and, lately, software. It is only natural that an emerging field like computer science sets out to write its genealogy and canonise the important steps in its intellectual endeavour. It is fair to say that a historiography diverging from such “home” interest started in 1987 with the work of Eda Kranakis – then active in The Netherlands – commissioned by the national bureau for technology assessment, and Gerard Alberts, turning a commemorative volume of the Mathematical Center into a history of the same institute. History of computing in The Netherlands made a major leap in the spring of 1994 when Dirk de Wit, Jan van den Ende and Ellen van Oost defended their dissertations, on the roads towards adoption of computing technology in banking, in science and engineering, and on the gender aspect in computing. Here, history of computing had already moved from machines to the use of computers. The three authors joined Gerard Alberts and Onno de Wit in preparing a volume on the rise of IT in The Netherlands, the sequel of which is now in preparation in a team led by Adrienne van den Bogaard. Dutch research reflected the international attention for professionalisation issues (Ensmenger, Haigh) very early on in the dissertation by Ruud van Dael, Something to do with computers (2001), revealing how occupations dealing with computers typically escape the pattern of closure by professionalisation as expected by the, thus outdated, sociology of professions. History of computing not only takes use and users into consideration, but finally, as one may say, confronts the technological side of putting the machine to use, software, head on. The groundbreaking works

  1. Sensitive Data Protection Based on Intrusion Tolerance in Cloud Computing

    OpenAIRE

    Jingyu Wang; xuefeng Zheng; Dengliang Luo

    2011-01-01

    Service integration and on-demand supply arising from cloud computing can significantly improve the utilization of computing resources, reduce the power consumption per service, and effectively avoid errors in computing resources. However, cloud computing still faces the problem of intrusion tolerance of the cloud computing platform and of the sensitive data of the new enterprise data center. In order to address the problem of intrusion tolerance of the cloud computing platform and sensitive data in...

  2. Active Computer Network Defense: An Assessment

    Science.gov (United States)

    2001-04-01

    sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in...the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/Internet Protocol (TCP/IP) networks are inherently resistant to...aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center

  3. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  4. Attitude towards Computers and Classroom Management of Language School Teachers

    Directory of Open Access Journals (Sweden)

    Sara Jalali

    2014-07-01

    Computer-assisted language learning (CALL) is the realization of computers in schools and universities, which has potentially enhanced the language learning experience inside the classroom. The integration of these technologies into the classroom demands that teachers adopt a number of classroom management procedures to maintain a more learner-centered and conducive language learning environment. The current study explored the relationship between computer attitudes and the behavior and instructional classroom management approaches implemented by English institute teachers. In so doing, a total of 105 male (n = 27) and female (n = 78) EFL teachers participated in this study. A computer attitude questionnaire adapted from Albirini (2006) and a Behavior and Instructional Management Scale (BIMS) adopted from Martin and Sass (2010) were used to collect the data. The results of the Pearson Correlation Coefficient revealed that there were no significant relationships between attitude and behavior and instructional management across gender. However, it was found that the more male teachers tend to use computers in their classes, the more teacher-centered their classes become. In addition, the more female teachers are prone to use computers in their classes, the more student-centered and lenient their classes become.

  5. Multi-Center Electronic Structure Calculations for Plasma Equation of State

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, B G; Johnson, D D; Alam, A

    2010-12-14

    We report on an approach for computing electronic structure utilizing solid-state multi-center scattering techniques, but generalized to finite temperatures to model plasmas. This approach has the advantage of handling mixtures at a fundamental level without the imposition of ad hoc continuum lowering models, and incorporates bonding and charge exchange, as well as multi-center effects in the calculation of the continuum density of states.

  6. SCELib2: the new revision of SCELib, the parallel computational library of molecular properties in the single center approach

    Science.gov (United States)

    Sanna, N.; Morelli, G.

    2004-09-01

    In this paper we present the new version of the SCELib program (CPC Catalogue identifier ADMG), a full numerical implementation of the Single Center Expansion (SCE) method. The physics involved is that of producing the SCE description of molecular electronic densities, of molecular electrostatic potentials and of molecular perturbed potentials due to a point negative or positive charge. This new revision of the program has been optimized to run in serial as well as in parallel execution mode, to support a larger set of molecular symmetries and to permit the restart of long-lasting calculations. To measure the performance of this new release, a comparative study has been carried out on the most powerful computing architectures in serial and parallel runs. The results of the calculations reported in this paper refer to real-case medium to large molecular systems, and they are reported in full detail to best benchmark the parallel architectures the new SCELib code will run on. Program summary Title of program: SCELib2 Catalogue identifier: ADGU Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADGU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to previous versions: Comput. Phys. Commun. 128 (2) (2000) 139 (CPC catalogue identifier: ADMG) Does the new version supersede the original program?: Yes Computer for which the program is designed and others on which it has been tested: HP ES45 and rx2600, SUN ES4500, IBM SP and any single CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors Installations: CASPUR, local Operating systems under which the program has been tested: HP Tru64 V5.X, SUNOS V5.8, IBM AIX V5.X, Linux RedHat V8.0 Programming language used: C Memory required to execute with typical data: 10 Mwords, up to 2000 Mwords depending on the molecular system and runtime parameters No. of bits in a word: 64 No. of processors used: 1 to 32 Has the code been vectorized or parallelized?: Yes

  7. Plasma Science and Innovation Center (PSI-Center) at Washington, Wisconsin, and Utah State, ARRA Supplement

    Energy Technology Data Exchange (ETDEWEB)

    Sovinec, Carl [Univ. of Wisconsin-Madison, Madison, WI (United States)

    2018-03-14

    The objective of the Plasma Science and Innovation Center (PSI-Center) is to develop and deploy computational models that simulate conditions in smaller, concept-exploration plasma experiments. The PSIC group at the University of Wisconsin-Madison, led by Prof. Carl Sovinec, uses and enhances the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, to simulate macroscopic plasma dynamics in a number of magnetic confinement configurations. These numerical simulations provide information on how magnetic fields and plasma flows evolve over all three spatial dimensions, which supplements the limited access of diagnostics in plasma experiments. The information gained from simulation helps explain how plasma evolves. It is also used to engineer more effective plasma confinement systems, reducing the need for building many experiments to cover the physical parameter space. The ultimate benefit is a more cost-effective approach to the development of fusion energy for peaceful power production. The supplemental funds provided by the American Recovery and Reinvestment Act of 2009 were used to purchase computer components that were assembled into a 48-core system with 256 Gb of shared memory. The system was engineered and constructed by the group's system administrator at the time, Anthony Hammond. It was successfully used by then graduate student, Dr. John O'Bryan, for computing magnetic relaxation dynamics that occur during experimental tests of non-inductive startup in the Pegasus Toroidal Experiment (pegasus.ep.wisc.edu). Dr. O'Bryan's simulations provided the first detailed explanation of how the driven helical filament of electrical current evolves into a toroidal tokamak-like plasma configuration.

  8. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  9. Developmental Stages in School Computer Use: Neither Marx Nor Piaget.

    Science.gov (United States)

    Lengel, James G.

    Karl Marx's theory of stages can be applied to computer use in the schools. The first stage, the P Stage, comprises the entry of the computer into the school. Computer use at this stage is personal and tends to center around one personality. Social studies teachers are seldom among this select few. The second stage of computer use, the D Stage, is…

  10. Surface phenomena in thermionic research (Oberflaechenphysikalische Probleme der Thermionik: lectures from the round-table conference at the Institut fuer Energiewandlung und Elektrische Antriebe of the DFVLR, Stuttgart, 1972) [Nine papers, 4 summaries]

    Energy Technology Data Exchange (ETDEWEB)

    Henne, R. (ed.)

    1973-07-15

    This report compiles papers concerning, in particular, recent developments in work function theory and work function measurements, which were given at a round-table conference on surface phenomena in thermionic research, arranged by the 'Deutsche Forschungs- und Versuchsanstalt fuer Luft- und Raumfahrt' in Stuttgart on Nov. 20 and Dec. 1, 1972. Nine papers are printed in full, four others in summary form. Two of them concern the work function of uncovered surfaces; two others show the influence of adsorbed electropositive elements (Cs, Sr) on the work function. In four papers the coadsorption of electropositive (Cs, Sr, or Ba) and electronegative (O2) elements and their influence on the work function of different surfaces are discussed. Finally a paper is added describing the development of Sr-Cs alloys, which are of interest for the generation of the atmosphere of a Sr-Cs converter by means of one single reservoir. (auth)

  12. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and new areas that support it and use it. It is a concise introduction to the most prominent, recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing, and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment make this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides a non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  13. About Security Solutions in Fog Computing

    Directory of Open Access Journals (Sweden)

    Eugen Petac

    2016-01-01

    Full Text Available The key to improving a system's performance, security and reliability is to have the data processed locally rather than in remote data centers. Fog computing extends cloud computing through its services to devices and users at the edge of the network. This paper explores the fog computing environment and also describes the security issues in this area. Fog computing improves the quality of services provided to the user by compensating for shortages of the cloud in the IoT (Internet of Things) environment. Our proposal, named Adaptive Fog Computing Node Security Profile (AFCNSP), which is based on Linux security solutions, achieves improved security of the fog node with rich feature sets.

  14. Readiness of healthcare providers for eHealth: the case from primary healthcare centers in Lebanon.

    Science.gov (United States)

    Saleh, Shadi; Khodor, Rawya; Alameddine, Mohamad; Baroud, Maysa

    2016-11-10

    eHealth can positively impact the efficiency and quality of healthcare services. Its potential benefits extend to the patient, healthcare provider, and organization. Primary healthcare (PHC) settings may particularly benefit from eHealth. In these settings, healthcare provider readiness is key to successful eHealth implementation. Accordingly, it is necessary to explore the potential readiness of providers to use eHealth tools. Therefore, the purpose of this study was to assess the readiness of healthcare providers working in PHC centers in Lebanon to use eHealth tools. A self-administered questionnaire was used to assess participants' socio-demographics, computer use, literacy, and access, and participants' readiness for eHealth implementation (appropriateness, management support, change efficacy, personal beneficence). The study included primary healthcare providers (physicians, nurses, other providers) working in 22 PHC centers distributed across Lebanon. Descriptive and bivariate analyses (ANOVA, independent t-test, Kruskal Wallis, Tamhane's T2) were used to compare participant characteristics to the level of readiness for the implementation of eHealth. Of the 541 questionnaires, 213 were completed (response rate: 39.4 %). The majority of participants were physicians (46.9 %), and nurses (26.8 %). Most physicians (54.0 %), nurses (61.4 %), and other providers (50.9 %) felt comfortable using computers, and had access to computers at their PHC center (physicians: 77.0 %, nurses: 87.7 %, others: 92.5 %). Frequency of computer use varied. The study found a significant difference for personal beneficence, management support, and change efficacy among different healthcare providers, and relative to participants' level of comfort using computers. There was a significant difference by level of comfort using computers and appropriateness. A significant difference was also found between those with access to computers in relation to personal beneficence and

  15. Communications among data and science centers

    Science.gov (United States)

    Green, James L.

    1990-01-01

    The ability to electronically access and query the contents of remote computer archives is of singular importance in space and earth sciences; the present evaluation of such on-line information networks' development status foresees swift expansion of their data capabilities and complexity, in view of the volumes of data that will continue to be generated by NASA missions. The U.S. National Space Science Data Center (NSSDC) manages NASA's largest science computer network, the Space Physics Analysis Network; a comprehensive account is given of the structure of NSSDC international access through BITNET, and of connections to the NSSDC available in the Americas via the International X.25 network.

  16. Scientific Computing Strategic Plan for the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Whiting, Eric Todd

    2015-01-01

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  17. Collection of reports on use of computation fund utilized in common in 1988

    International Nuclear Information System (INIS)

    1989-05-01

    Nuclear Physics Research Center, Osaka University, has provided the jointly utilized computation fund since 1976 to support computation related to the activities of the Center. After using this fund, researchers must submit a brief report in a fixed format (printed in RCNP-Z together with the report of the committee on the jointly utilized computation fund) and a detailed report on the contents of the computation. The latter report includes an English abstract, an explanation of the results obtained and their physical content, new developments, difficulties in the computational techniques and how they were solved, the subroutines and functions used together with their purposes and block diagrams, and so on. This book is the collection of the latter reports on the use of the jointly utilized computation fund in fiscal year 1988. The invitation to apply for the fund is announced in RCNP-Z in December of every year. (K.I.)

  18. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  19. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  20. ACToR - Aggregated Computational Toxicology Resource

    International Nuclear Information System (INIS)

    Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

    2008-01-01

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast(TM).

  1. CICART Center For Integrated Computation And Analysis Of Reconnection And Turbulence

    International Nuclear Information System (INIS)

    Bhattacharjee, Amitava

    2016-01-01

    CICART is a partnership between the University of New Hampshire (UNH) and Dartmouth College. CICART addresses two important science needs of the DoE: the basic understanding of magnetic reconnection and turbulence that strongly impacts the performance of fusion plasmas, and the development of new mathematical and computational tools that enable the modeling and control of these phenomena. The principal participants of CICART constitute an interdisciplinary group, drawn from the communities of applied mathematics, astrophysics, computational physics, fluid dynamics, and fusion physics. It is a main premise of CICART that fundamental aspects of magnetic reconnection and turbulence in fusion devices, smaller-scale laboratory experiments, and space and astrophysical plasmas can be viewed from a common perspective, and that progress in understanding in any of these interconnected fields is likely to lead to progress in others. The establishment of CICART has strongly impacted the education and research mission of a new Program in Integrated Applied Mathematics in the College of Engineering and Applied Sciences at UNH by enabling the recruitment of a tenure-track faculty member, supported equally by UNH and CICART, and the establishment of an IBM-UNH Computing Alliance. The proposed areas of research in magnetic reconnection and turbulence in astrophysical, space, and laboratory plasmas include the following topics: (A) Reconnection and secondary instabilities in large high-Lundquist-number plasmas, (B) Particle acceleration in the presence of multiple magnetic islands, (C) Gyrokinetic reconnection: comparison with fluid and particle-in-cell models, (D) Imbalanced turbulence, (E) Ion heating, and (F) Turbulence in laboratory (including fusion-relevant) experiments. These theoretical studies make active use of three high-performance computer simulation codes: (1) The Magnetic Reconnection Code, based on extended two-fluid (or Hall MHD) equations, in an Adaptive Mesh

  2. Conversion of SMART I. Zur Konvertierung von SMART I

    Energy Technology Data Exchange (ETDEWEB)

    Argyris, J H; Szimmat, J; William, K J [Stuttgart Univ. (TH) (Germany, F.R.). Inst. fuer Statik und Dynamik der Luft- und Raumfahrtkonstruktionen

    1977-01-01

    The report describes the conversion of the programming system SMART I for CDC, UNIVAC and IBM computers under the BMFT grant RK 21 I/SBB 31. There were four tasks for the development of a machine-independent SMART version: a) updating of the CDC source library (ca. 180,000 Fortran statements); b) conversion to double precision; c) primary installation on UNIVAC; d) primary installation on IBM. The conversion of the SMART I program was carried out in cooperation with the consulting firms RIB, Stuttgart, and IKOSS, Stuttgart, under the leadership of ISD.

  3. The VINEYARD project: Versatile Integrated Accelerator-based Heterogeneous Data Centers

    OpenAIRE

    Kachris, Christoforos; Soudris, Dimitrios; Gaydadjiev, Georgi; Nguyen, Huy-Nam

    2016-01-01

    Emerging applications like cloud computing and big data analytics have created the need for powerful data centers hosting hundreds of thousands of servers. Currently, data centers are based on general purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop novel servers based on programmable hardware accelerators. Furthermore, VINEYARD will develop an integrated framework for allowing end-users to seamlessly utilize...

  4. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    Science.gov (United States)

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks induced by the rapid growth of the Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, a theoretical analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.
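
    The finite improvement property invoked above guarantees that sequential best-response updates terminate at a Nash equilibrium in any potential game. The sketch below illustrates that decision loop for a singleton congestion-style offloading game; the cost model (a fixed local cost versus an offloading cost that grows with the number of sensors sharing the wireless AP) is an illustrative assumption, not the paper's exact delay-plus-energy model.

        import random

        random.seed(0)
        N = 20                                              # number of IoT sensors
        LOCAL_COST = [random.uniform(5.0, 15.0) for _ in range(N)]

        def offload_cost(k):
            """Cost of offloading when k sensors offload (congestion grows with k)."""
            return 2.0 + 0.8 * k                            # illustrative cost model

        def best_response(decisions, i):
            """Sensor i's best decision (0 = local, 1 = offload), others held fixed."""
            others = sum(decisions) - decisions[i]          # offloaders besides i
            return 1 if offload_cost(others + 1) < LOCAL_COST[i] else 0

        decisions = [0] * N
        improved = True
        while improved:                                     # FIP => terminates at a NE
            improved = False
            for i in range(N):
                br = best_response(decisions, i)
                if br != decisions[i]:
                    decisions[i], improved = br, True

        print("offloaders at equilibrium:", sum(decisions))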

  5. Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems

    Science.gov (United States)

    Mitchell, Christine M.

    1998-01-01

    A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument Systems (EFIS), Flight Management Computer (FMC) and Auto Flight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional looking B757/767 displays or next generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.

  6. A Modified Artificial Bee Colony Algorithm for p-Center Problems

    Directory of Open Access Journals (Sweden)

    Alkın Yurtkuran

    2014-01-01

    Full Text Available The objective of the p-center problem is to locate p centers on a network such that the maximum of the distances from each node to its nearest center is minimized. The artificial bee colony (ABC) algorithm is a swarm-based meta-heuristic algorithm that mimics the foraging behavior of honey bee colonies. This study proposes a modified ABC algorithm that benefits from a variety of search strategies to balance exploration and exploitation. Moreover, random key-based coding schemes are used to solve the p-center problem effectively. The proposed algorithm is compared to state-of-the-art techniques using different benchmark problems, and computational results reveal that the proposed approach is very efficient.
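
    As a concrete illustration of the random key-based coding mentioned above, the sketch below decodes a vector of continuous keys into p center locations and evaluates the p-center objective. The decoding rule (select the p nodes holding the smallest keys) is one common convention, assumed here for illustration rather than taken from the paper.

        import random

        random.seed(1)

        def decode(keys, p):
            """Random-key decoding: the p nodes with the smallest keys become centers."""
            return sorted(range(len(keys)), key=keys.__getitem__)[:p]

        def p_center_cost(centers, dist):
            """p-center objective: max over nodes of the distance to the nearest center."""
            return max(min(dist[i][c] for c in centers) for i in range(len(dist)))

        # Tiny random instance: Euclidean distances over n points in the unit square.
        n, p = 10, 3
        pts = [(random.random(), random.random()) for _ in range(n)]
        dist = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 for b in pts] for a in pts]

        keys = [random.random() for _ in range(n)]    # one candidate solution
        centers = decode(keys, p)
        print("centers:", centers, "cost:", round(p_center_cost(centers, dist), 3))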

  7. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob Daniel

    2012-07-01

    Full Text Available Abstract Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud” and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
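
    The client-driven communication pattern described above amounts to a worker poll loop: each node asks the server for work, runs it, and reports the outcome, so workers need no inbound connectivity and load balancing falls out naturally. Below is a minimal sketch of that pattern with an in-memory stand-in for the server; the job fields and server interface are hypothetical illustrations, not JobCenter's actual API.

        import subprocess
        from collections import deque

        queue = deque([{"id": 1, "command": ["echo", "hello"]},
                       {"id": 2, "command": ["python", "--version"]}])
        results = {}

        def next_job():                      # stand-in for the JobCenter server
            return queue.popleft() if queue else None

        def report(job_id, rc, out):         # stand-in for reporting back to the server
            results[job_id] = (rc, out.strip())

        while True:                          # the worker's client-driven poll loop
            job = next_job()
            if job is None:
                break                        # a real worker would sleep and poll again
            run = subprocess.run(job["command"], capture_output=True, text=True)
            report(job["id"], run.returncode, run.stdout or run.stderr)

        print(results)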

  8. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  9. User-centered design in brain-computer interfaces-a case study.

    Science.gov (United States)

    Schreuder, Martijn; Riccio, Angela; Risetti, Monica; Dähne, Sven; Ramsay, Andrew; Williamson, John; Mattia, Donatella; Tangermann, Michael

    2013-10-01

    The array of available brain-computer interface (BCI) paradigms has continued to grow, and so has the corresponding set of machine learning methods which are at the core of BCI systems. The latter have evolved to provide more robust data analysis solutions, and as a consequence the proportion of healthy BCI users who can use a BCI successfully is growing. With this development the chances have increased that the needs and abilities of specific patients, the end-users, can be covered by an existing BCI approach. However, most end-users who have experienced the use of a BCI system at all have encountered a single paradigm only. This paradigm is typically the one that is being tested in the study that the end-user happens to be enrolled in, along with other end-users. Though this corresponds to the preferred study arrangement for basic research, it does not ensure that the end-user experiences a working BCI. In this study, a different approach was taken; that of a user-centered design. It is the prevailing process in traditional assistive technology. Given an individual user with a particular clinical profile, several available BCI approaches are tested and - if necessary - adapted to him/her until a suitable BCI system is found. Described is the case of a 48-year-old woman who suffered from an ischemic brain stem stroke, leading to a severe motor- and communication deficit. She was enrolled in studies with two different BCI systems before a suitable system was found. The first was an auditory event-related potential (ERP) paradigm and the second a visual ERP paradigm, both of which are established in literature. The auditory paradigm did not work successfully, despite favorable preconditions. The visual paradigm worked flawlessly, as found over several sessions. This discrepancy in performance can possibly be explained by the user's clinical deficit in several key neuropsychological indicators, such as attention and working memory. While the auditory paradigm relies

  10. [Activities of Research Institute for Advanced Computer Science]

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  11. St. Luke's Medical Center: technologizing health care

    International Nuclear Information System (INIS)

    Tumanguil, S.S.

    1994-01-01

    The computerization of the St. Luke's Medical Center improved the hospital administration and management, particularly in nuclear medicine department. The use of computer-aided X-ray simulator machine and computerized linear accelerator machine in diagnosing and treating cancer are the most recent medical technological breakthroughs that benefited thousands of Filipino cancer patients. 4 photos

  12. VACET: Proposed SciDAC2 Visualization and Analytics Center for Enabling Technologies

    International Nuclear Information System (INIS)

    Bethel, W; Johnson, C; Hansen, C; Parker, S; Sanderson, A; Silva, C; Tricoche, X; Pascucci, V; Childs, H; Cohen, J; Duchaineau, M; Laney, D; Lindstrom, P; Ahern, S; Meredith, J; Ostrouchov, G; Joy, K; Hamann, B

    2006-01-01

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an 'information big bang', which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies

  13. Cloud Computing Databases: Latest Trends and Architectural Concepts

    OpenAIRE

    Tarandeep Singh; Parvinder S. Sandhu

    2011-01-01

    Economic factors are leading to the rise of infrastructures that provide software and computing facilities as a service, known as cloud services or cloud computing. Cloud services can provide efficiencies for application providers, both by limiting up-front capital expenses and by reducing the cost of ownership over time. Such services are made available in a data center, using shared commodity hardware for computation and storage. There is a varied set of cloud services...

  14. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

    Full Text Available Cloud computing is a service model in which computing resources are packaged as a base service that can be accessed through the Internet on demand and placed in the data center. Data center architecture in cloud computing environments is heterogeneous and distributed, composed of a cluster of network servers with different computing resource capacities in different physical servers. Fluctuating demand for and availability of cloud services can be handled by the data center through abstraction with virtualization technology. A virtual machine (VM) is a representation of the available computing resources that can be dynamically allocated and reallocated on demand. This study considers VM consolidation as an energy-conservation measure in a private cloud computing system, targeting the optimization of the VM selection policy and VM migration within the consolidation procedure. In a cloud data center, each VM instance hosting a particular type of application or service requires a different level of computing resources. Unbalanced use of computing resources by VMs across physical servers can be reduced by live VM migration to achieve workload balancing. A practical approach is used in developing the OpenStack-based cloud computing environment, integrating VM placement and selection procedures using OpenStack Neat VM consolidation. The CPU time counter is sampled to obtain a VM's average CPU utilization in MHz over a given period: the difference between the current and the previous CPU_time sample is multiplied by the maximum frequency of the CPU, and the result is divided by the elapsed time (in milliseconds) between the two samples.
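
    The utilization formula described in the abstract is easy to state in code. The sketch below is a minimal reading of it, assuming the CPU_time samples are cumulative nanoseconds of CPU consumed (as exposed by common hypervisor counters) and the timestamps are wall-clock milliseconds; these unit conventions are assumptions for illustration.

        def avg_cpu_mhz(cpu_time_now_ns, cpu_time_prev_ns, t_now_ms, t_prev_ms, max_freq_mhz):
            """Average CPU utilization of a VM, in MHz, between two samples."""
            busy_ms = (cpu_time_now_ns - cpu_time_prev_ns) / 1e6   # ns of CPU time -> ms
            elapsed_ms = t_now_ms - t_prev_ms                      # wall-clock interval
            # Fraction of the interval the VM kept a core busy, scaled by max frequency.
            return max_freq_mhz * busy_ms / elapsed_ms

        # Example: 1.2 s of CPU consumed over a 4 s window on a 2600 MHz core -> 780 MHz.
        print(avg_cpu_mhz(5.2e9, 4.0e9, 10_000, 6_000, 2600.0))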

  15. Magnetic fusion energy and computers: the role of computing in magnetic fusion energy research and development

    International Nuclear Information System (INIS)

    1979-10-01

    This report examines the role of computing in the Department of Energy magnetic confinement fusion program. The present status of the MFECC and its associated network is described. The third part of this report examines the role of computer models in the main elements of the fusion program and discusses their dependence on the most advanced scientific computers. A review of requirements at the National MFE Computer Center was conducted in the spring of 1976. The results of this review led to the procurement of the CRAY 1, the most advanced scientific computer available, in the spring of 1978. The utilization of this computer in the MFE program has been very successful and is also described in the third part of the report. A new study of computer requirements for the MFE program was conducted during the spring of 1979 and the results of this analysis are presented in the fourth part of this report

  16. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
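
    As a toy illustration of the hierarchical strategy sketched above, the snippet below routes each incoming session request to the least-loaded cluster and then places the transceiver chain first-fit on a processor inside that cluster. Both policies, and the capacity numbers, are assumptions for illustration, not the specific algorithms evaluated in the paper.

        # Two clusters, each with three processors of equal free capacity (arbitrary units).
        clusters = [{"free": [4.0, 4.0, 4.0]}, {"free": [4.0, 4.0, 4.0]}]

        def allocate(demand):
            """Place one SDR transceiver chain with the given computing demand."""
            cluster = max(clusters, key=lambda c: sum(c["free"]))  # least-loaded cluster
            for i, free in enumerate(cluster["free"]):             # first-fit within it
                if free >= demand:
                    cluster["free"][i] -= demand
                    return True
            return False                                           # session blocked

        for demand in [1.5, 2.0, 3.5, 1.0, 3.0]:
            print(demand, "->", "accepted" if allocate(demand) else "blocked")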

  17. Human Centered Design and Development for NASA's MerBoard

    Science.gov (United States)

    Trimble, Jay

    2003-01-01

    This viewgraph presentation provides an overview of the design and development process for NASA's MerBoard. These devices are large interactive display screens which can be shown on the user's computer, allowing scientists in many locations to interpret and evaluate mission data in real time. These tools are scheduled to be used during the 2003 Mars Exploration Rover (MER) expeditions. Topics covered include: mission overview, MER human-centered computing, FIDO 2001 observations and MerBoard prototypes.

  18. 10th International Conference on Computer Simulation of Radiation Effects in Solids - COSIRES 2010. Abstracts and Programme

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    COSIRES 2010 is the 10th International Conference on Computer Simulation of Radiation Effects in Solids. This series of conferences addresses the development and application of advanced computer modeling techniques to the study of phenomena taking place during interaction of energetic particles and clusters (from several eV to some MeV) with solids. Due to the continuous development of new theoretical methodologies and the permanent increase of computer power this research field is growing fast. The application of computer simulations leads to a better understanding of basic microscopic processes taking place during and after irradiation. Fundamental understanding of such processes is often not accessible by experimental methods since they occur on very small time and length scales. However, computer simulation techniques are not only used for investigations of basic phenomena but also increasingly applied in the development of modern industrial technologies. Conference topics include, but are not limited to: I) Computer modeling of the following phenomena: • Sputtering; • Formation and evolution of radiation defects in materials; • Radiation responses of structural materials important for the nuclear and fusion industry; • Irradiation-induced evolution of surface topography and ripple formation; • Ion beam synthesis of thin films and nanostructures; • Ion-, electron- and photon-induced physical and chemical effects at surfaces, interfaces and nanostructures; • Irradiation-induced charge redistribution, electron excitation and electron-phonon interactions. II) Development of new computer modeling protocols and interatomic potentials for investigation of radiation effects. The conference follows previous meetings that were held in Berlin/Germany (1992), Santa Barbara/USA (1994), Guildford/UK (1996), Okayama/Japan (1998), State College/USA (2000), Dresden/Germany (2002), Helsinki/Finland (2004

  19. Spectroscopic and computational study of a nonheme iron nitrosyl center in a biosynthetic model of nitric oxide reductase.

    Science.gov (United States)

    Chakraborty, Saumen; Reed, Julian; Ross, Matthew; Nilges, Mark J; Petrik, Igor D; Ghosh, Soumya; Hammes-Schiffer, Sharon; Sage, J Timothy; Zhang, Yong; Schulz, Charles E; Lu, Yi

    2014-02-24

    A major barrier to understanding the mechanism of nitric oxide reductases (NORs) is the lack of a selective probe of NO binding to the nonheme FeB center. By replacing the heme in a biosynthetic model of NORs, which structurally and functionally mimics NORs, with isostructural ZnPP, the electronic structure and functional properties of the FeB nitrosyl complex were probed. This approach allowed observation of the first S = 3/2 nonheme {FeNO}7 complex in a protein-based model system of NOR. Detailed spectroscopic and computational studies show that the electronic state of the {FeNO}7 complex is best described as a high-spin ferrous iron (S = 2) antiferromagnetically coupled to an NO radical (S = 1/2), [Fe2+-NO•]. The radical nature of the FeB-bound NO would facilitate N-N bond formation by radical coupling with the heme-bound NO. This finding, therefore, supports the proposed trans mechanism of NO reduction by NORs. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  1. Computer-aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1997-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP)

  2. Spectrum of tablet computer use by medical students and residents at an academic medical center.

    Science.gov (United States)

    Robinson, Robert

    2015-01-01

    Introduction. The value of tablet computer use in medical education is an area of considerable interest, with preliminary investigations showing that the majority of medical trainees feel that tablet computers added value to the curriculum. This study investigated potential differences in tablet computer use between medical students and resident physicians. Materials & Methods. Data collection for this survey was accomplished with an anonymous online questionnaire shared with the medical students and residents at Southern Illinois University School of Medicine (SIU-SOM) in July and August of 2012. Results. There were 76 medical student responses (26% response rate) and 66 resident/fellow responses to this survey (21% response rate). Residents/fellows were more likely to use tablet computers several times daily than medical students (32% vs. 20%, p = 0.035). The most common reported uses were for accessing medical reference applications (46%), e-Books (45%), and board study (32%). Residents were more likely than students to use a tablet computer to access an electronic medical record (41% vs. 21%, p = 0.010), review radiology images (27% vs. 12%, p = 0.019), and enter patient care orders (26% vs. 3%, p e-Books, and to study for board exams. Residents were more likely to use tablet computers to complete clinical tasks. Conclusions. Tablet computer use among medical students and resident physicians was common in this survey. All learners used tablet computers for point of care references and board study. Resident physicians were more likely to use tablet computers to access the EMR, enter patient care orders, and review radiology studies. This difference is likely due to the differing educational and professional demands placed on resident physicians. Further study is needed better understand how tablet computers and other mobile devices may assist in medical education and patient care.

  3. Data center equipment location and monitoring system

    DEFF Research Database (Denmark)

    2011-01-01

    A data center equipment location system includes both hardware and software to provide for location, monitoring, security and identification of servers and other equipment in equipment racks. The system provides a wired alternative to the wireless RFID tag system by using electronic ID tags connected to each piece of equipment, each electronic ID tag connected directly by wires to an equipment rack controller on the equipment rack. The equipment rack controllers then link over a local area network to a central control computer. The central control computer provides an operator interface, and runs a software application program that communicates with the equipment rack controllers. The software application program of the central control computer stores IDs of the equipment rack controllers and each of its connected electronic ID tags in a database. The software application program...
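
    A minimal sketch of the central database relationship described above: rack controllers keyed to the electronic ID tags wired beneath them, queried to locate a piece of equipment. The table and field names are hypothetical illustrations, not taken from the patent text.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE tag (tag_id TEXT PRIMARY KEY, controller_id TEXT, "
                   "rack_slot INTEGER, equipment TEXT)")
        db.executemany("INSERT INTO tag VALUES (?, ?, ?, ?)", [
            ("TAG-001", "RACK-A", 1, "web server 01"),
            ("TAG-002", "RACK-A", 2, "db server 07"),
            ("TAG-003", "RACK-B", 1, "switch 03"),
        ])

        # Locate equipment: which rack controller and slot reports its ID tag?
        row = db.execute("SELECT controller_id, rack_slot FROM tag WHERE equipment = ?",
                         ("db server 07",)).fetchone()
        print("db server 07 is at", row)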

  4. Digital computer structure and design

    CERN Document Server

    Townsend, R

    2014-01-01

    Digital Computer Structure and Design, Second Edition discusses switching theory, counters, sequential circuits, number representation, and arithmetic functions. The book also describes computer memories, the processor, the data flow system of the processor, the processor control system, and the input-output system. Switching theory, which is purely a mathematical concept, centers on the properties of interconnected networks of "gates." The theory deals with binary functions of 1 and 0 which can change instantaneously from one to the other without intermediate values. The binary number system is
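
    As a small worked example of the switching theory the blurb describes: gates are binary functions of 0 and 1, and interconnecting them realizes other binary functions, e.g. XOR built from AND, OR and NOT. The sketch below is for illustration only; the book's own notation differs.

        AND = lambda a, b: a & b          # elementary gates as binary functions
        OR = lambda a, b: a | b
        NOT = lambda a: 1 - a

        def XOR(a, b):
            """XOR as a network of gates: (a AND NOT b) OR (NOT a AND b)."""
            return OR(AND(a, NOT(b)), AND(NOT(a), b))

        for a in (0, 1):                  # enumerate the truth table
            for b in (0, 1):
                print(a, b, "->", XOR(a, b))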

  5. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Science.gov (United States)

    2013-07-15

    ... 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match No. 18 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION... Act of 1974, as amended, this notice announces the establishment of a CMP that CMS plans to conduct...

  6. Space Flight Operations Center local area network

    Science.gov (United States)

    Goodman, Ross V.

    1988-01-01

    The existing Mission Control and Computer Center at JPL will be replaced by the Space Flight Operations Center (SFOC). One part of the SFOC is the LAN-based distribution system. The purpose of the LAN is to distribute the processed data among the various elements of the SFOC. The SFOC LAN will provide a robust subsystem that will support the Magellan launch configuration and future project adaptation. Its capabilities include (1) a proven cable medium as the backbone for the entire network; (2) hardware components that are reliable, varied, and follow OSI standards; (3) accurate and detailed documentation for fault isolation and future expansion; and (4) proven monitoring and maintenance tools.

  7. Portability and the National Energy Software Center

    International Nuclear Information System (INIS)

    Butler, M.K.

    1978-01-01

    The software portability problem is examined from the viewpoint of experience gained in the operation of a software exchange and information center. First, the factors contributing to the program interchange to date are identified; then major remaining problem areas are noted. The importance of the development of programming language and documentation standards is noted, and the program packaging procedures and dissemination practices employed by the Center to facilitate successful software transport are described. Organization, or installation, dependencies of the computing environment, often hidden from the program author, and data interchange complexities are seen as today's primary issues, with dedicated processors and network communications offering an alternative solution

  8. PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)

    Science.gov (United States)

    Troparevsky, Claudia; Stocks, George Malcolm

    2012-12-01

    Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc

  9. Institute for Computational Mechanics in Propulsion (ICOMP)

    Science.gov (United States)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    2001-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1999, the Institute's fourteenth year of operation.

  10. Planning and management of cloud computing networks

    Science.gov (United States)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and as entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of an important power consumption. If the power consumption of telecommunication networks and data centers were considered as that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a

  11. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the amount of traffic related to the geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Then, numerical evaluations show that, when there is strong traffic locality and the router has ideal energy proportionality, the system's power consumption is reduced to about 50% of the power consumed in the case where a DCN is not used; moreover, this advantage becomes even larger (up to about 30%) when the data center is located farthest from the center of the network topology.
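
    The trade-off evaluated above can be captured in a toy model: traffic served at a nearby DCN node traverses fewer router hops than traffic hauled to a central data center, so WAN power falls as the locality fraction rises. The hop counts, per-bit energy and traffic volume below are illustrative assumptions, not the paper's formulation.

        E_PER_BIT_HOP = 10e-9     # J per bit per router hop (assumed)
        HOPS_TO_DC = 6            # hops from users to the central data center (assumed)
        HOPS_TO_EDGE = 1          # hops from users to the nearest DCN node (assumed)
        TRAFFIC_BPS = 100e9       # aggregate user traffic in bit/s (assumed)

        def wan_power(locality):
            """WAN power (W) when a fraction `locality` of traffic is served at the edge."""
            local = locality * TRAFFIC_BPS * HOPS_TO_EDGE * E_PER_BIT_HOP
            remote = (1.0 - locality) * TRAFFIC_BPS * HOPS_TO_DC * E_PER_BIT_HOP
            return local + remote

        baseline = wan_power(0.0)   # everything hauled to the central data center
        for loc in (0.0, 0.5, 0.9):
            print(f"locality={loc:.1f}: {wan_power(loc) / baseline:.0%} of baseline power")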

  12. CICART Center For Integrated Computation And Analysis Of Reconnection And Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharjee, Amitava [Univ. of New Hampshire, Durham, NH (United States)

    2016-03-27

    CICART is a partnership between the University of New Hampshire (UNH) and Dartmouth College. CICART addresses two important science needs of the DoE: the basic understanding of magnetic reconnection and turbulence that strongly impacts the performance of fusion plasmas, and the development of new mathematical and computational tools that enable the modeling and control of these phenomena. The principal participants of CICART constitute an interdisciplinary group, drawn from the communities of applied mathematics, astrophysics, computational physics, fluid dynamics, and fusion physics. It is a main premise of CICART that fundamental aspects of magnetic reconnection and turbulence in fusion devices, smaller-scale laboratory experiments, and space and astrophysical plasmas can be viewed from a common perspective, and that progress in understanding in any of these interconnected fields is likely to lead to progress in others. The establishment of CICART has strongly impacted the education and research mission of a new Program in Integrated Applied Mathematics in the College of Engineering and Applied Sciences at UNH by enabling the recruitment of a tenure-track faculty member, supported equally by UNH and CICART, and the establishment of an IBM-UNH Computing Alliance. The proposed areas of research in magnetic reconnection and turbulence in astrophysical, space, and laboratory plasmas include the following topics: (A) Reconnection and secondary instabilities in large high-Lundquist-number plasmas, (B) Particle acceleration in the presence of multiple magnetic islands, (C) Gyrokinetic reconnection: comparison with fluid and particle-in-cell models, (D) Imbalanced turbulence, (E) Ion heating, and (F) Turbulence in laboratory (including fusion-relevant) experiments. These theoretical studies make active use of three high-performance computer simulation codes: (1) The Magnetic Reconnection Code, based on extended two-fluid (or Hall MHD) equations, in an Adaptive Mesh

  13. [Doctor-Patient Communication Training in Simulated Situations: Emotions and Perceptions of Simulated Patients during Patient-Centered Conversations].

    Science.gov (United States)

    Butollo, Maria Asisa; Holzinger, Anita; Wagner-Menghin, Michaela

    2018-04-13

    The use of simulated patients (SPs) for doctor-patient communication training has been established in medical curricula as an important didactic method. The study addresses the question of whether patients' emotions and perceptions are represented adequately in patient-centered communication. 22 of 37 SPs of the Medical University of Vienna (12 women, 10 men) were asked openly about their feelings after having acted as an SP in a semi-structured interview, which employed the Critical Incident Technique. The interviews were recorded, transcribed, separated into situational analysis units and analyzed deductively; we used the evidence-based qualities of patient-centered communication and the "Nationaler Kompetenzbasierter Lernzielkatalog Medizin" as a guideline. Out of 192 analysis units, 67 were evaluated as positive and 125 as negative. The SPs reported positive feelings, such as perceiving "stability and trust in relationships" (22%), perception of congruence (15%), acceptance (27%) and empathy (36%). As to negative feelings, SPs reported "perceiving instability" (18%), "incongruence" (11%), "lack of acceptance" (40%) and "lack of empathy" (30%). Additionally, 50% of SPs were positively affected when observing students' learning success. When SPs perceived patient-centered communication, they reported positive emotions. A lack of patient centeredness, on the contrary, provoked negative emotions. An empathic attitude, as well as a "lack of acceptance" with contrary effects, had the strongest influence on the SPs' mental state. The reaction of SPs to patient centeredness is sufficiently authentic to reach learning objectives; however, it is also affected by reactions of SPs to the learning success of students, which is irrelevant for the real-life doctor-patient interaction. SP reactions are affected by students' attitudes. Students should therefore be prepared well before interacting with SPs in a roleplay setting. While SPs' behavior is authentic in patient-centered

  14. Carbon Dioxide Information Analysis Center and World Data Center for Atmospheric Trace Gases Fiscal Year 1999 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Cushman, R.M.

    2000-03-31

    The Carbon Dioxide Information Analysis Center (CDIAC), which includes the World Data Center (WDC) for Atmospheric Trace Gases, is the primary global-change data and information analysis center of the Department of Energy (DOE). More than just an archive of data sets and publications, CDIAC has, since its inception in 1982, enhanced the value of its holdings through intensive quality assurance, documentation, and integration. Whereas many traditional data centers are discipline-based (for example, meteorology or oceanography), CDIAC's scope includes potentially anything and everything that would be of value to users concerned with the greenhouse effect and global climate change, including concentrations of carbon dioxide (CO{sub 2}) and other radiatively active gases in the atmosphere; the role of the terrestrial biosphere and the oceans in the biogeochemical cycles of greenhouse gases; emissions of CO{sub 2} and other trace gases to the atmosphere; long-term climate trends; the effects of elevated CO{sub 2} on vegetation; and the vulnerability of coastal areas to rising sea level. CDIAC is located within the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. CDIAC is co-located with ESD researchers investigating global-change topics, such as the global carbon cycle and the effects of carbon dioxide on vegetation. CDIAC staff are also connected with current ORNL research on related topics, such as renewable energy and supercomputing technologies. CDIAC is supported by the Environmental Sciences Division (Jerry Elwood, Acting Director) of DOE's Office of Biological and Environmental Research. CDIAC's FY 1999 budget was $2.2 million. CDIAC represents the DOE in the multi-agency Global Change Data and Information System. Bobbi Parra is DOE's Program Manager with responsibility for CDIAC, with Wanda Ferrell serving in that role on an interim basis. CDIAC comprises three groups, Global Change Data, Computer Systems, and

  15. Blast forecasting guide for the Site 300 Meteorology Center

    International Nuclear Information System (INIS)

    Odell, B.N.; Pfeifer, H.E.; Arganbright, V.E.

    1978-01-01

    These step-by-step procedures enable an occasional operator to run the Site 300 Meteorological Center. The primary function of the Center is to determine the maximum weight of high explosives that can be fired at Site 300 under any given meteorological conditions. A secondary function is to supply weather data for other programs such as ARAC (Atmospheric Release Advisory Capability). Included in the primary function are radar and theodolite operations for balloon tracking; calculation of temperatures for various altitudes using Oakland weather obtained from a teletype; computer terminal operation to obtain wind directions, wind velocities, temperatures, and pressure at various altitudes; and methods to determine high-explosive weight limits for simple inversions and focus conditions using pressure-versus-altitude information obtained from the computer. General information is included such as names, telephone numbers, and addresses of maintenance personnel, additional sources of weather information, chart suppliers, balloons, spare parts, etc

  16. Blast forecasting guide for the Site 300 Meteorology Center

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B.N.; Pfeifer, H.E.; Arganbright, V.E.

    1978-06-01

    These step-by-step procedures enable an occasional operator to run the Site 300 Meteorological Center. The primary function of the Center is to determine the maximum weight of high explosives that can be fired at Site 300 under any given meteorological conditions. A secondary function is to supply weather data for other programs such as ARAC (Atmospheric Release Advisory Capability). Included in the primary function are radar and theodolite operations for balloon tracking; calculation of temperatures for various altitudes using Oakland weather obtained from a teletype; computer terminal operation to obtain wind directions, wind velocities, temperatures, and pressure at various altitudes; and methods to determine high-explosive weight limits for simple inversions and focus conditions using pressure-versus-altitude information obtained from the computer. General information is included such as names, telephone numbers, and addresses of maintenance personnel, additional sources of weather information, chart suppliers, balloons, spare parts, etc.

  17. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Science.gov (United States)

    2013-08-07

    ... 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match No. 12 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice... of 1974, as amended, this notice establishes a CMP that CMS plans to conduct with the Department of...

  18. Climate Prediction Center (CPC) Palmer Drought and Crop Moisture Indices

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Climate Prediction Center (CPC) Palmer Drought Severity and Crop Moisture Indices are computed for the 344 U.S. Climate Divisions on a weekly basis based on a...

  19. [Geometry, analysis, and computation in mathematics and applied science]. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, D.

    1994-02-01

    The principal investigators' work on a variety of pure and applied problems in Differential Geometry, Calculus of Variations and Mathematical Physics has been done in a computational laboratory and been based on interactive scientific computer graphics and high speed computation created by the principal investigators to study geometric interface problems in the physical sciences. We have developed software to simulate various physical phenomena from constrained plasma flow to the electron microscope imaging of the microstructure of compound materials, techniques for the visualization of geometric structures that have been used to make significant breakthroughs in the global theory of minimal surfaces, and graphics tools to study evolution processes, such as flow by mean curvature, while simultaneously developing the mathematical foundation of the subject. An increasingly important activity of the laboratory is to extend this environment in order to support and enhance scientific collaboration with researchers at other locations. Toward this end, the Center developed the GANGVideo distributed video software system and software methods for running lab-developed programs simultaneously on remote and local machines. Further, the Center operates a broadcast video network, running in parallel with the Center's data networks, over which researchers can access stored video materials or view ongoing computations. The graphical front-end to GANGVideo can be used to make "multi-media mail" from both "live" computing sessions and stored materials without video editing. Currently, videotape is used as the delivery medium, but GANGVideo is compatible with future "all-digital" distribution systems. Thus as a byproduct of mathematical research, we are developing methods for scientific communication. But, most important, our research focuses on important scientific problems; the parallel development of computational and graphical tools is driven by scientific needs.

  20. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  1. Green Computing: Need of the Hour

    Science.gov (United States)

    Jena, Rabindra Ku

    Environmental and energy conservation issues have taken center stage in the global business arena in recent years. The reality of rising energy costs and their impact on international affairs, coupled with the increased concern over the global warming climate crisis and other environmental issues, have shifted the social and economic consciousness of the business community. This paper discusses green computing needs and also studies the participation of different stakeholders in the implementation of green computing concepts in India.

  2. Computer models for kinetic equations of magnetically confined plasmas

    International Nuclear Information System (INIS)

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method

  3. Chest X ray effective doses estimation in computed radiography

    International Nuclear Information System (INIS)

    Abdalla, Esra Abdalrhman Dfaalla

    2013-06-01

    Conventional chest radiography is technically difficult because of the wide range of tissue attenuations in the chest and the limitations of screen-film systems. Computed radiography (CR) offers a different approach utilizing a photostimulable phosphor; photostimulable phosphors overcome some image quality limitations of chest imaging. The objective of this study was to estimate the effective dose in computed radiography at three hospitals in Khartoum. The study was conducted in the radiography departments of three centres: Advanced Diagnostic Center, Nilain Diagnostic Center and Modern Diagnostic Center. Entrance surface dose (ESD) measurements were conducted for quality control of the X-ray machines and a survey of operators' experimental techniques. The ESDs were measured with an UNFORS dosimeter, and mathematical equations were used to estimate patient doses during chest X rays. A total of 120 patients were examined in the three centres, among them 62 males and 58 females. The overall mean and range of patient doses was 0.073±0.037 (0.014-0.16) mGy per procedure, while the effective dose was 3.4±1.7 (0.6-7.0) mSv per procedure. This study compared radiation doses to patients in chest radiographic examinations using computed radiography. The radiation dose was measured in three centres in Khartoum, Sudan. The measured effective doses showed that the dose in chest radiography was lower in computed radiography compared to previous studies.(Author)
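
    The effective-dose arithmetic used in studies like this one multiplies the measured ESD by an examination-specific conversion coefficient. A minimal sketch in Python; the coefficient below is purely illustrative, since the paper's actual equations and coefficients are not reproduced here:

        # Hypothetical illustration of estimating effective dose from a measured
        # entrance surface dose (ESD). The conversion coefficient is a
        # placeholder, not a value from the study above.
        def effective_dose_mSv(esd_mGy: float, conversion_mSv_per_mGy: float) -> float:
            """Effective dose E = ESD x exam-specific conversion coefficient."""
            return esd_mGy * conversion_mSv_per_mGy

        # Example: the mean ESD of 0.073 mGy reported above, with an assumed
        # chest-PA conversion coefficient (illustrative only).
        print(effective_dose_mSv(0.073, 0.2))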

  4. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    Science.gov (United States)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  5. Research on Using the Naturally Cold Air and the Snow for Data Center Air-conditioning, and Humidity Control

    Science.gov (United States)

    Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko

    Lowering power consumption has become a worldwide concern. It is also becoming a bigger issue in computer systems, as reflected by the growing use of software-as-a-service and cloud computing, whose market has grown since 2000; at the same time, the number of data centers that house and manage computers has increased rapidly. Power consumption at data centers accounts for a big share of the entire IT power usage, and is still rapidly increasing. This research focuses on the air-conditioning that accounts for the biggest portion of electric power consumption by data centers, and proposes a technique to lower the power consumption by applying naturally cold air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we also examine the extent to which energy reduction is possible when a data center is located in Hokkaido.
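
    At its core, the proposal is an economizer argument: count how much of the year the outside air is cold enough to carry the cooling load directly. A minimal sketch, with invented temperature data and an assumed supply-air limit rather than the authors' measurements:

        # A minimal sketch (not from the paper): count the hours in a year when
        # outside air is cold enough to cool a data center directly. The
        # temperature data and threshold are assumed for illustration.
        import random

        random.seed(0)
        # Hypothetical hourly outdoor temperatures for a cold-climate site (deg C).
        hourly_temp = [random.gauss(8.0, 10.0) for _ in range(8760)]

        SUPPLY_LIMIT = 18.0  # assumed maximum usable outdoor temperature

        free_hours = sum(1 for t in hourly_temp if t <= SUPPLY_LIMIT)
        print(f"free-cooling hours: {free_hours} ({100.0 * free_hours / 8760:.1f}% of year)")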

  6. [On the astrology and computation in the 12th century: new unpublished manuscripts ].

    Science.gov (United States)

    Caiazzo, Irene

    2012-01-01

    This article examines three hitherto unknown lemmatic commentaries on computus and astrolabe topics, which are to be found in MS Stuttgart, Württembergische Landesbibliothek, Cod. math. 4° 33 (second half of the twelfth century). The commentaries are on the 'Compotus' by Gerlandus, on the 'De mensura astrolabii' by Hermann of Reichenau, and on the 'De utilitatibus astrolabii', which is sometimes attributed to Gerbert of Aurillac. No commentaries on the respective treatises have previously been identified as such. The commentaries of the Stuttgart manuscript are of special interest in that they allow us to understand how a twelfth-century scholar read works on computus and the astrolabe, namely works that date back to the eleventh century. Their author remains anonymous, but in all probability he wrote his commentary on the 'Compotus' by Gerlandus either in 1143 or in 1150. An appendix to the article includes transcriptions of the introductory texts on the computus and on the astrolabe as well as the beginnings of the commentaries.

  7. a Recursive Approach to Compute Normal Forms

    Science.gov (United States)

    HSU, L.; MIN, L. J.; FAVRETTO, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.
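
    The recursion such programs implement rests on the homological equation: at each order, a near-identity change of variables is chosen to cancel the non-resonant terms. A toy one-dimensional instance of the order-2 step, worked in sympy for illustration rather than the authors' Maple:

        # Toy version of the recursive idea above: the near-identity change of
        # variables x = y + h*y**2 removes the quadratic term of
        # x' = lam*x + a*x**2 when lam != 0 (the non-resonant case). Solving
        # the resulting "homological equation" order by order is the recursion
        # that normal-form programs automate.
        import sympy as sp

        y, h, lam, a = sp.symbols('y h lam a')

        x = y + h * y**2                      # near-identity transformation
        xdot = lam * x + a * x**2             # original right-hand side
        # Chain rule: xdot = (dx/dy) * ydot  =>  ydot = xdot / (dx/dy)
        ydot = sp.series(xdot / sp.diff(x, y), y, 0, 3).removeO().expand()

        quad = ydot.coeff(y, 2)               # coefficient of y**2 in new system
        print(sp.solve(sp.Eq(quad, 0), h))    # -> [a/lam], removing the term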

  8. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    Science.gov (United States)

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  9. An Ontology-Based Architecture for Adaptive Work-Centered User Interface Technology

    National Research Council Canada - National Science Library

    Aragones, Amy; Bruno, Jeanette; Crapo, Andrew; Garbiras, Marc

    2005-01-01

    .... The first concept is to use an ontology modeling approach to characterize a work domain in terms of "work-centered" activities as well as the computation mechanisms that achieve an implementation...

  10. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  11. Production Management System for AMS Computing Centres

    Science.gov (United States)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production), as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modularized, light-weighted, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model, and implemented in the scripting languages Python and Perl with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transferring protocols are supported. The details of the integration with Open Science Grid are presented as well.
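
    The finite-automaton design can be pictured as a job table in which a state may only change along whitelisted transitions. A minimal sketch with Python and the built-in sqlite3 module; the state names and schema are hypothetical, not the actual AMS implementation:

        # Toy DFA-style job tracker: each production job is a record whose
        # state may only advance along fixed transitions.
        import sqlite3

        TRANSITIONS = {                       # allowed state -> next states
            'acquired':     {'submitted'},
            'submitted':    {'running'},
            'running':      {'transferring', 'failed'},
            'transferring': {'accounted'},
            'failed':       {'submitted'},    # retry
        }

        db = sqlite3.connect(':memory:')
        db.execute('CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT)')
        db.execute("INSERT INTO jobs (state) VALUES ('acquired')")

        def advance(job_id: int, new_state: str) -> None:
            (old,) = db.execute('SELECT state FROM jobs WHERE id=?', (job_id,)).fetchone()
            if new_state not in TRANSITIONS.get(old, set()):
                raise ValueError(f'illegal transition {old} -> {new_state}')
            db.execute('UPDATE jobs SET state=? WHERE id=?', (new_state, job_id))

        advance(1, 'submitted')
        advance(1, 'running')
        print(db.execute('SELECT state FROM jobs WHERE id=1').fetchone())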

  12. Mean centering, multicollinearity, and moderators in multiple regression: The reconciliation redux.

    Science.gov (United States)

    Iacobucci, Dawn; Schneider, Matthew J; Popovich, Deidre L; Bakamitsos, Georgios A

    2017-02-01

    In this article, we attempt to clarify our statements regarding the effects of mean centering. In a multiple regression with predictors A, B, and A × B (where A × B serves as an interaction term), mean centering A and B prior to computing the product term can clarify the regression coefficients (which is good) and the overall model fit R² will remain undisturbed (which is also good).
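
    The claim is easy to verify numerically: centering A and B before forming A × B changes the lower-order coefficients but spans the same column space as the uncentered design, so R² is unchanged. A small check with numpy (synthetic data, ordinary least squares):

        # Numerical check that mean centering leaves R^2 untouched.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        A = rng.normal(5, 2, n)
        B = rng.normal(3, 1, n)
        Y = 1.0 + 0.5 * A - 0.3 * B + 0.2 * A * B + rng.normal(0, 1, n)

        def r_squared(columns, y):
            X = np.column_stack([np.ones(len(y))] + list(columns))
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1 - resid.var() / y.var()

        raw = r_squared([A, B, A * B], Y)
        Ac, Bc = A - A.mean(), B - B.mean()
        centered = r_squared([Ac, Bc, Ac * Bc], Y)
        print(raw, centered)   # identical up to floating-point error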

  13. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  14. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  15. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
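
    The information sharing protocol is described only conceptually in these records, but its client-server side can be pictured as a publish/subscribe hub for named telemetry parameters. A toy sketch, not NASA's actual protocol:

        # Toy publish/subscribe hub for named telemetry parameters; peers in
        # the hybrid model could likewise publish derived values to each other.
        from collections import defaultdict

        class InfoSharing:
            def __init__(self):
                self.subscribers = defaultdict(list)   # parameter -> callbacks

            def subscribe(self, parameter, callback):
                self.subscribers[parameter].append(callback)

            def publish(self, parameter, value):
                for callback in self.subscribers[parameter]:
                    callback(parameter, value)

        hub = InfoSharing()
        hub.subscribe('cabin_pressure', lambda p, v: print(f'{p} = {v}'))
        hub.publish('cabin_pressure', 14.7)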

  16. Lightness : a function-virtualizable software defined data center network with all-optical circuit/packet switching

    NARCIS (Netherlands)

    Saridis, G.; Peng, S.; Yan, Y.; Aguado, A.; Guo, B.; Arslan, M.; Jackson, C.; Miao, W.; Calabretta, N.; Agraz, F.; Spadaro, S.; Bernini, G.; Ciulli, N.; Zervas, G.; Nejabati, R.; Simeonidou, D.

    2016-01-01

    Modern high-performance Data Centers are responsible for delivering a huge variety of cloud applications to the end-users, which are increasingly pushing the limits of currently deployed computing and network infrastructure. All-optical dynamic data center network (DCN) architectures are strong

  17. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
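
    As a flavor of the model-centered approach, a single phase of such a system can be simulated in a few lines of Python with the Lindley recursion and checked against the analytic M/M/1 mean waiting time. This example is illustrative and not taken from the paper:

        # Simulate waiting times in an M/M/1 queue with the Lindley recursion
        # W[n+1] = max(0, W[n] + S[n] - A[n+1]) and compare with the analytic
        # mean wait Wq = rho / (mu - lam).
        import random

        random.seed(1)
        lam, mu, n = 0.8, 1.0, 200_000    # arrival rate, service rate, customers

        w, total = 0.0, 0.0
        for _ in range(n):
            total += w
            service = random.expovariate(mu)
            interarrival = random.expovariate(lam)
            w = max(0.0, w + service - interarrival)

        rho = lam / mu
        print('simulated mean wait:', total / n)
        print('analytic mean wait :', rho / (mu - lam))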

  18. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be finished relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches and effectively solves the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows is significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.

  19. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be finished relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches and effectively solves the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows is significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925
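
    The vector address idea can be illustrated with a toy forwarder: the source encodes the entire path as a sequence of output ports, and each switch simply consumes the next entry, with no table lookup. A hypothetical sketch, not the paper's NetFPGA implementation:

        # Illustrative source routing: the VA label is a list of output ports.
        def forward(packet):
            """Simulate hop-by-hop forwarding driven purely by the VA label."""
            while packet['va']:
                port = packet['va'].pop(0)     # next output port in the vector
                print(f"switch forwards on port {port}")
            print('delivered:', packet['payload'])

        forward({'va': [3, 1, 4], 'payload': 'hello'})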

  20. Institute for Computational Mechanics in Propulsion (ICOMP). 10

    Science.gov (United States)

    Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)

    1996-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1995.

  1. BaBar computing - From collisions to physics results

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The BaBar experiment at SLAC studies B-physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as their evolution with the ever higher luminosity being delivered by PEP-II. This will include data acquisition and software triggering in a high availability, low-deadtime online environment, a prompt, automated calibration pass through the data SLAC and then the full reconstruction of the data that takes place at INFN-Padova within 24 hours. Monte Carlo production takes place in a highly automated fashion in 25+ sites. The resulting real and simulated data is distributed and made available at SLAC and other computing centers. For analysis a much more sophisticated skimming pass has been introduced in the past year, ...

  2. Montessori Transformation at Computer Associates.

    Science.gov (United States)

    Mars, Lisa

    2002-01-01

    Describes the growth of the all-day Montessori program for children ages 6 weeks to 6 years at Computer Associates' corporate headquarters and multiple sites worldwide. Focuses on placement of AMI Montessori-trained teachers, refurbishing of the child development centers to fit Montessori specifications, and the Nido--the children's community--and…

  3. Center for modeling of turbulence and transition: Research briefs, 1995

    Science.gov (United States)

    1995-10-01

    This research brief contains the progress reports of the research staff of the Center for Modeling of Turbulence and Transition (CMOTT) from July 1993 to July 1995. It also constitutes a progress report to the Institute of Computational Mechanics in Propulsion located at the Ohio Aerospace Institute and the Lewis Research Center. CMOTT has been in existence for about four years. In the first three years, its main activities were to develop and validate turbulence and combustion models for propulsion systems, in an effort to remove the deficiencies of existing models. Three workshops on computational turbulence modeling were held at LeRC (1991, 1993, 1994). At present, CMOTT is integrating the CMOTT developed/improved models into CFD tools which can be used by the propulsion systems community. This activity has resulted in an increased collaboration with the Lewis CFD researchers.

  4. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  5. Generalized bibliographic format as used by the Ecological Sciences Information Center

    International Nuclear Information System (INIS)

    Allison, L.J.; Pfuderer, H.A.; Collier, B.N.

    1979-03-01

    The purpose of this document is to provide guidance for the preparation of computer input for the information programs being developed by the Ecological Sciences Information Center (ESIC)/Information Center Complex (ICC) of the Oak Ridge National Laboratory (ORNL). Through the use of a generalized system, the data of all the centers of ICC are compatible. Literature included in an information data base has a number of identifying characteristics. Each of these characteristics or data fields can be recognized and searched by the computer. The information for each field must have an alphanumeric label or field descriptor. All of the labels presently used are sets of upper-case letters approximating the name of the field they represent. Presently, there are 69 identified fields; additional fields may be included in the future. The format defined here is designed to facilitate the input of information to the ADSEP program. This program processes data for the ORNL on-line (ORLOOK) search system and is a special case of the ADSEP text input option

  6. Generalized bibliographic format as used by the Ecological Sciences Information Center

    Energy Technology Data Exchange (ETDEWEB)

    Allison, L.J.; Pfuderer, H.A.; Collier, B.N.

    1979-03-01

    The purpose of this document is to provide guidance for the preparation of computer input for the information programs being developed by the Ecological Sciences Information Center (ESIC)/Information Center Complex (ICC) of the Oak Ridge National Laboratory (ORNL). Through the use of a generalized system, the data of all the centers of ICC are compatible. Literature included in an information data base has a number of identifying characteristics. Each of these characteristics or data fields can be recognized and searched by the computer. The information for each field must have an alphanumeric label or field descriptor. All of the labels presently used are sets of upper-case letters approximating the name of the field they represent. Presently, there are 69 identified fields; additional fields may be included in the future. The format defined here is designed to facilitate the input of information to the ADSEP program. This program processes data for the ORNL on-line (ORLOOK) search system and is a special case of the ADSEP text input option.

  7. The CMS experiment workflows on StoRM based storage at Tier-1 and Tier-2 centers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Bartolome, I Cabrillo; Matorras, F; Gonzalez Caballero, I; Sartirana, A

    2010-01-01

    Approaching LHC data taking, the CMS experiment is deploying, commissioning and operating the building tools of its grid-based computing infrastructure. The commissioning program includes testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Recently, some of the Tier-1 and Tier-2 centers supporting the collaboration have started to deploy StoRM based storage systems. These are POSIX-based disk storage systems on top of which StoRM implements the Storage Resource Manager (SRM) version 2 interface allowing for a standard-based access from the Grid. In these notes we briefly describe the experience so far achieved at the CNAF Tier-1 center and at the IFCA Tier-2 center.

  8. Parallel neural pathways in higher visual centers of the Drosophila brain that mediate wavelength-specific behavior

    Directory of Open Access Journals (Sweden)

    Hideo eOtsuna

    2014-02-01

    Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior.

  9. Comparison of canal transportation and centering ability of rotary protaper, one shape system and wave one system using cone beam computed tomography: An in vitro study

    Science.gov (United States)

    Tambe, Varsha Harshal; Nagmode, Pradnya Sunil; Abraham, Sathish; Patait, Mahendra; Lahoti, Pratik Vinod; Jaju, Neha

    2014-01-01

    Aim: The aim of the present study was to compare the canal transportation and centering ability of Rotary ProTaper, One Shape and Wave One systems using cone beam computed tomography (CBCT) in curved root canals to find a better instrumentation technique for maintaining root canal geometry. Materials and Methods: Total 30 freshly extracted premolars having curved root canals with at least 10 degrees of curvature were divided into three groups of 10 teeth each. All teeth were scanned by CBCT to determine the root canal shape before instrumentation. In Group 1, the canals were prepared with Rotary ProTaper files, in Group 2 the canals were prepared with One Shape files and in Group 3 canals were prepared with Wave One files. After preparation, a post-instrumentation scan was performed. Pre-instrumentation and post-instrumentation images were obtained at three levels, 3 mm apical, 3 mm coronal and 8 mm apical above the apical foramen, and were compared using CBCT software. Amount of transportation and centering ability were assessed. The three groups were statistically compared with analysis of variance and the Tukey honestly significant difference (HSD) test. Results: All instruments maintained the original canal curvature with significant differences between the different files. Data suggested that Wave One files presented the best outcomes for both the variables evaluated. Wave One files caused lesser transportation and remained better centered in the canal than One Shape and Rotary ProTaper files. Conclusion: The canal preparation with Wave One files showed lesser transportation and better centering ability than One Shape and ProTaper. PMID:25506145
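
    Such CBCT studies commonly quantify the two outcome variables with the formulas of Gambill et al., computed from pre- and post-instrumentation distances between the canal and the mesial (m) and distal (d) root edges. A small sketch with invented measurements, not the study's data:

        # Commonly used (Gambill et al.) measures; values are hypothetical.
        #   transportation  = (m1 - m2) - (d1 - d2)   (0 means none)
        #   centering ratio = smaller of (m1-m2)/(d1-d2) and (d1-d2)/(m1-m2)
        m1, m2, d1, d2 = 0.95, 0.70, 0.90, 0.72   # mm, illustrative values

        a, b = m1 - m2, d1 - d2
        transportation = a - b
        centering = min(a, b) / max(a, b) if max(a, b) != 0 else 1.0

        print(f'transportation: {transportation:+.2f} mm')
        print(f'centering ratio: {centering:.2f} (1.0 = perfectly centered)')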

  10. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with application in several fields of engineering, like automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, on a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real problems applications based upon their strong background on modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  11. NOAA/West coast and Alaska Tsunami warning center Atlantic Ocean response criteria

    Science.gov (United States)

    Whitmore, P.; Refidaff, C.; Caropolo, M.; Huerfano-Moreno, V.; Knight, W.; Sammler, W.; Sandrik, A.

    2009-01-01

    West Coast/Alaska Tsunami Warning Center (WCATWC) response criteria for earthquakes occurring in the Atlantic and Caribbean basins are presented. Initial warning center decisions are based on an earthquake's location, magnitude, depth, distance from coastal locations, and precomputed threat estimates based on tsunami models computed from similar events. The new criteria will help limit the geographical extent of warnings and advisories to threatened regions, and complement the new operational tsunami product suite. Criteria are set for tsunamis generated by earthquakes, which are by far the main cause of tsunami generation (either directly through sea floor displacement or indirectly by triggering of sub-sea landslides).The new criteria require development of a threat data base which sets warning or advisory zones based on location, magnitude, and pre-computed tsunami models. The models determine coastal tsunami amplitudes based on likely tsunami source parameters for a given event. Based on the computed amplitude, warning and advisory zones are pre-set.
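
    Operationally, criteria of this kind reduce to a lookup that maps an earthquake's parameters to an initial product. The sketch below is schematic; the thresholds are invented for illustration and are not the WCATWC's actual criteria:

        # Schematic decision rule; thresholds are hypothetical.
        def initial_product(magnitude: float, depth_km: float, offshore: bool) -> str:
            if not offshore or depth_km > 100.0:
                return 'information statement'     # little tsunami potential
            if magnitude >= 7.9:
                return 'warning'
            if magnitude >= 7.1:
                return 'advisory'
            if magnitude >= 6.5:
                return 'information statement'
            return 'no product'

        print(initial_product(magnitude=7.4, depth_km=20.0, offshore=True))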

  12. Virtual Meteorological Center

    Directory of Open Access Journals (Sweden)

    Marius Brinzila

    2007-10-01

    A virtual meteorological center, computer-based and able to transmit its information over the Internet, is presented. Data are collected with a logging field meteorological station. The station collects and automatically saves data on air temperature, relative humidity, pressure, wind speed and wind direction, rainfall (rain gauge), solar radiation and air quality. It can also perform sensor tests, analyze historical data and evaluate statistical information. The novelty of the system is that it can publish data over the Internet using LabVIEW Web Server capabilities and deliver a video signal to the School TV network. The system also performs redundant measurements of temperature and humidity and was improved using new sensors and an original signal-conditioning module.

  13. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  14. Clinical utility of dental cone-beam computed tomography: current perspectives

    Directory of Open Access Journals (Sweden)

    Jaju PP

    2014-04-01

    Prashant P Jaju(1), Sushma P Jaju(2); (1) Oral Medicine and Radiology, (2) Conservative Dentistry and Endodontics, Rishiraj College of Dental Sciences and Research Center, Bhopal, India. Abstract: Panoramic radiography and computed tomography were the pillars of maxillofacial diagnosis. With the advent of cone-beam computed tomography, dental practice has seen a paradigm shift. This review article highlights the potential applications of cone-beam computed tomography in the fields of dental implantology and forensic dentistry, and its limitations in maxillofacial diagnosis. Keywords: dental implants, cone-beam computed tomography, panoramic radiography, computed tomography

  15. Transformation of topologically close-packed β-W to body-centered cubic α-W: Comparison of experiments and computations.

    Science.gov (United States)

    Barmak, Katayun; Liu, Jiaxing; Harlan, Liam; Xiao, Penghao; Duncan, Juliana; Henkelman, Graeme

    2017-10-21

    The enthalpy and activation energy for the transformation of the metastable form of tungsten, β-W, which has the topologically close-packed A15 structure (space group Pm-3n), to equilibrium α-W, which is body-centered cubic (A2, space group Im-3m), was measured using differential scanning calorimetry. The β-W films were 1 μm thick and were prepared by sputter deposition in argon with a small amount of nitrogen. The transformation enthalpy was measured as -8.3 ± 0.4 kJ/mol (-86 ± 4 meV/atom) and the transformation activation energy as 2.2 ± 0.1 eV. The measured enthalpy was found to agree well with the difference in energies of α and β tungsten computed using density functional theory, which gave a value of -82 meV/atom for the transformation enthalpy. A calculated concerted transformation mechanism with a barrier of 0.4 eV/atom, in which all the atoms in an A15 unit cell transform into A2, was found to be inconsistent with the experimentally measured activation energy for any critical nucleus larger than two A2 unit cells. Larger calculations of eight A15 unit cells spontaneously relax to a mechanism in which part of the supercell first transforms from A15 to A2, creating a phase boundary, before the remaining A15 transforms into the A2 phase. Both calculations indicate that a nucleation and growth mechanism is favored over a concerted transformation. More consistent with the experimental activation energy was that of a calculated local transformation mechanism at the A15-A2 phase boundary, computed as 1.7 eV using molecular dynamics simulations. This calculated phase transformation mechanism involves collective rearrangements of W atoms in the disordered interface separating the A15 and A2 phases.
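
    The two units quoted for the transformation enthalpy can be cross-checked directly, since 1 eV/atom corresponds to about 96.485 kJ/mol:

        # Consistency check of the quoted units: -8.3 kJ/mol should be roughly
        # -86 meV/atom, as reported above.
        KJ_PER_MOL_PER_EV = 96.485

        delta_h_kj_per_mol = -8.3
        delta_h_mev_per_atom = delta_h_kj_per_mol / KJ_PER_MOL_PER_EV * 1000.0
        print(f'{delta_h_mev_per_atom:.0f} meV/atom')   # ~ -86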

  16. Jackson State University's Center for Spatial Data Research and Applications: New facilities and new paradigms

    Science.gov (United States)

    Davis, Bruce E.; Elliot, Gregory

    1989-01-01

    Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but is one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a Vax mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripheral are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental data base, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.

  17. Carbon Dioxide Information Analysis Center and World Data Center for Atmospheric Trace Gases Fiscal Year 2001 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Cushman, R.M.

    2002-10-15

    The Carbon Dioxide Information Analysis Center (CDIAC), which includes the World Data Center (WDC) for Atmospheric Trace Gases, is the primary global change data and information analysis center of the U.S. Department of Energy (DOE). More than just an archive of data sets and publications, CDIAC has, since its inception in 1982, enhanced the value of its holdings through intensive quality assurance, documentation, and integration. Whereas many traditional data centers are discipline-based (for example, meteorology or oceanography), CDIAC's scope includes potentially anything and everything that would be of value to users concerned with the greenhouse effect and global climate change, including concentrations of carbon dioxide (CO{sub 2}) and other radiatively active gases in the atmosphere; the role of the terrestrial biosphere and the oceans in the biogeochemical cycles of greenhouse gases; emissions of CO{sub 2} and other trace gases to the atmosphere; long-term climate trends; the effects of elevated CO{sub 2} on vegetation; and the vulnerability of coastal areas to rising sea levels. CDIAC is located within the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. CDIAC is co-located with ESD researchers investigating global-change topics, such as the global carbon cycle and the effects of carbon dioxide on climate and vegetation. CDIAC staff are also connected with current ORNL research on related topics, such as renewable energy and supercomputing technologies. CDIAC is supported by the Environmental Sciences Division (Jerry Elwood, Director) of DOE's Office of Biological and Environmental Research. CDIAC represents DOE in the multi-agency Global Change Data and Information System (GCDIS). Wanda Ferrell is DOE's Program Manager with overall responsibility for CDIAC. Roger Dahlman is responsible for CDIAC's AmeriFlux tasks, and Anna Palmisano for CDIAC's Ocean Data tasks. CDIAC is made

  18. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

    When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT server from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. The six combinations of flooded and locally ducted air distribution make up the vast majority of all installations, except fully ducted air distribution methods. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied with a high heat density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center is measured to validate a model for predicting the effect of different air distribution systems. A method for planning and design of the appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)

  19. A Variable Service Broker Routing Policy for data center selection in cloud analyst

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2017-07-01

    Cloud computing depends on sharing distributed computing resources to handle different services such as servers, storage and applications. The applications and infrastructures are provided as pay-per-use services through data centers to the end user. The data centers are located at different geographic locations; however, these data centers can get overloaded as the number of client applications being serviced at the same time and location increases, which degrades the overall QoS of the distributed services. Since different user applications may require different configurations and requirements, measuring the performance of user applications across various resources is challenging, and the service provider cannot make decisions for the right level of resources. Therefore, we propose a Variable Service Broker Routing Policy (VSBRP), a heuristic-based technique that aims to achieve minimum response time by considering the communication channel bandwidth, the latency and the size of the job. The proposed service broker policy will also reduce the overloading of the data centers by redirecting user requests to the next data center that yields better response and processing time. The simulation shows promising results in terms of response and processing time compared to other known broker policies from the literature.
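
    The selection rule amounts to estimating a response time per data center from latency, bandwidth, job size and current load, and routing to the minimum. A minimal sketch with invented fields and numbers; the paper's exact cost model may differ:

        # Hypothetical broker: route each job to the data center with the
        # smallest estimated response time (latency + transfer + queueing).
        def estimated_response(dc, job_size_mb):
            transfer = job_size_mb * 8.0 / dc['bandwidth_mbps']   # seconds
            return dc['latency_s'] + transfer + dc['queue_s']

        data_centers = [
            {'name': 'DC1', 'latency_s': 0.020, 'bandwidth_mbps': 1000,  'queue_s': 0.5},
            {'name': 'DC2', 'latency_s': 0.080, 'bandwidth_mbps': 10000, 'queue_s': 0.1},
        ]

        job_size_mb = 200
        best = min(data_centers, key=lambda dc: estimated_response(dc, job_size_mb))
        print(best['name'])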

  20. Examining the Computer Self-Efficacy Perceptions of Gifted Students

    Science.gov (United States)

    Kaplan, Abdullah; Öztürk, Mesut; Doruk, Muhammet; Yilmaz, Alper

    2013-01-01

    This study was conducted in order to determine the computer self-efficacy perceptions of gifted students. The research group of this study is composed of gifted students (N = 36) who were studying at the Science and Arts Center in Gümüshane province in the spring semester of the 2012-2013 academic year. The "Computer Self-Efficacy Perception…

  1. Rice diversity panels available through the genetic stocks oryza collection

    Science.gov (United States)

    The Genetic Stocks Oryza (GSOR) Collection was established in 2004 at the USDA-ARS, Dale Bumpers National Rice Research Center (DBNRRC) located in Stuttgart, AR. The mission of GSOR is to provide unique genetic resources to the rice research community for genetic and genomics related research. GSOR ...

  2. Coordinating Center: Molecular and Cellular Findings of Screen-Detected Lesions | Division of Cancer Prevention

    Science.gov (United States)

    The Molecular and Cellular Characterization of Screen-Detected Lesions - Coordinating Center and Data Management Group will provide support for the participating studies responding to RFA CA14-10. The coordinating center supports three main domains: network coordination; statistical support and computational analysis; and protocol development and database support. Support for

  3. Reference Architecture for Multi-Layer Software Defined Optical Data Center Networks

    Directory of Open Access Journals (Sweden)

    Casimer DeCusatis

    2015-09-01

    Full Text Available As cloud computing data centers grow larger and networking devices proliferate, many complex issues arise in the network management architecture. We propose a framework for multi-layer, multi-vendor optical network management using open standards-based software defined networking (SDN). Experimental results are demonstrated in a test bed consisting of three data centers interconnected by a 125 km metropolitan area network, running OpenStack with KVM and VMware components. Use cases include inter-data center connectivity via a packet-optical metropolitan area network, intra-data center connectivity using an optical mesh network, and SDN coordination of networking equipment within and between multiple data centers. We create and demonstrate original software to implement virtual network slicing and affinity policy-as-a-service offerings. Enhancements to synchronous storage backup, cloud exchanges, and Fibre Channel over Ethernet topologies are also discussed.
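
    The virtual network slicing idea can be pictured as granting each tenant a disjoint subset of the wavelengths on a link. The toy allocator below sketches that bookkeeping under assumed link parameters; it is not the original software described in the record.

    ```python
    # Toy sketch of virtual network slicing on an optical link: each slice
    # (tenant) receives a disjoint subset of the wavelengths available on a
    # link, so traffic in one slice cannot contend with another. Link names
    # and wavelength counts are hypothetical.

    class WavelengthSlicer:
        def __init__(self, num_wavelengths: int):
            self.free = set(range(num_wavelengths))   # unassigned lambdas
            self.slices: dict[str, set[int]] = {}     # tenant -> lambdas

        def allocate(self, tenant: str, count: int) -> set[int]:
            if count > len(self.free):
                raise RuntimeError("not enough free wavelengths for slice")
            grant = {self.free.pop() for _ in range(count)}
            self.slices.setdefault(tenant, set()).update(grant)
            return grant

        def release(self, tenant: str) -> None:
            self.free |= self.slices.pop(tenant, set())

    link = WavelengthSlicer(num_wavelengths=40)   # e.g., a 40-channel DWDM link
    print(link.allocate("tenant-a", 8))
    print(link.allocate("tenant-b", 4))
    link.release("tenant-a")
    print(len(link.free))                         # 36 wavelengths free again
    ```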

  4. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistics, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is needed to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  5. Projected Applications of a "Climate in a Box" Computing System at the NASA Short-term Prediction Research and Transition (SPoRT) Center

    Science.gov (United States)

    Jedlovec, G.; Molthan, A.; Zavodsky, B.; Case, J.; Lafontaine, F.

    2010-12-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to “Climate in a Box” systems, with hardware configurations capable of producing high resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the “Climate in a Box” system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA’s Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the “Climate in a Box” system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed

  6. Projected Applications of a "Climate in a Box" Computing System at the NASA Short-Term Prediction Research and Transition (SPoRT) Center

    Science.gov (United States)

    Jedlovec, Gary J.; Molthan, Andrew L.; Zavodsky, Bradley; Case, Jonathan L.; LaFontaine, Frank J.

    2010-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to "Climate in a Box" systems, with hardware configurations capable of producing high resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the "Climate in a Box" system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the "Climate in a Box" system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed within the NASA SPo

  7. Ambient radiation levels in positron emission tomography/computed tomography (PET/CT) imaging center

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Priscila do Carmo; Oliveira, Paulo Marcio Campos de; Mamede, Marcelo; Silveira, Mariana de Castro; Aguiar, Polyanna; Real, Raphaela Vila, E-mail: pridili@gmail.com [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil); Silva, Teogenes Augusto da [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2015-01-15

    Objective: to evaluate the level of ambient radiation in a PET/CT center. Materials and methods: previously selected and calibrated TLD-100H thermoluminescent dosimeters were utilized to measure room radiation levels. During 32 days, the detectors were placed in several strategically selected points inside the PET/CT center and in adjacent buildings. After the exposure period the dosimeters were collected and processed to determine the radiation level. Results: in none of the points selected for measurement did the values exceed the radiation dose threshold for controlled areas (5 mSv/year) or free areas (0.5 mSv/year) as recommended by the Brazilian regulations. Conclusion: in the present study the authors demonstrated that the whole shielding system is appropriate and, consequently, the workers are exposed to doses below the threshold established by Brazilian standards, provided the radiation protection standards are followed. (author)

  8. Computing for Lattice QCD: new developments from the APE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R [INFN, Sezione di Roma Tor Vergata, Roma (Italy); Biagioni, A; De Luca, S [INFN, Sezione di Roma, Roma (Italy)

    2008-06-15

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  9. Computing for Lattice QCD: new developments from the APE experiment

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; De Luca, S.

    2008-01-01

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  10. Teaching introductory computer security at a Department of Defense university

    OpenAIRE

    Irvine, Cynthia E.

    1997-01-01

    The Naval Postgraduate School Center for Information Systems Security (INFOSEC) Studies and Research (NPS CISR) has developed an instructional program in computer security. Its objective is to ensure that students not only understand practical aspects of computer security associated with current technology, but also learn the fundamental principles that can be applied to the development of systems for which high confidence in policy enforcement can be achieved. Introduction to Computer Sec...

  11. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    Science.gov (United States)

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  12. Software-defined optical network for metro-scale geographically distributed data centers.

    Science.gov (United States)

    Samadi, Payman; Wen, Ke; Xu, Junjie; Bergman, Keren

    2016-05-30

    The emergence of cloud computing and big data has rapidly increased the deployment of small and mid-sized data centers. Enterprises and cloud providers require an agile network among these data centers to empower application reliability and flexible scalability. We present a software-defined inter-data-center network to enable on-demand scale-out of data centers on a metro-scale optical network. The architecture consists of a combined space/wavelength switching platform and a Software-Defined Networking (SDN) control plane equipped with a wavelength and routing assignment module. It enables establishing transparent and bandwidth-selective connections from L2/L3 switches on demand. The architecture is evaluated in a testbed consisting of 3 data centers, 5-25 km apart. We successfully demonstrated end-to-end bulk data transfer and Virtual Machine (VM) migrations across data centers with less than 100 ms connection setup time and close to full link capacity utilization.
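
    The wavelength and routing assignment module mentioned above can be approximated by the classic first-fit heuristic: route each request on a shortest path, then assign the lowest-indexed wavelength that is free on every hop. A minimal sketch, with an invented three-node metro topology:

    ```python
    # Minimal first-fit routing-and-wavelength-assignment (RWA) sketch, the
    # classic heuristic behind many SDN wavelength-assignment modules. The
    # 3-node metro topology and channel count are hypothetical.
    import networkx as nx

    NUM_LAMBDAS = 8
    g = nx.Graph()
    for u, v in [("DC1", "DC2"), ("DC2", "DC3"), ("DC1", "DC3")]:
        g.add_edge(u, v, used=set())          # wavelengths in use per link

    def first_fit_rwa(src: str, dst: str):
        path = nx.shortest_path(g, src, dst)
        hops = list(zip(path, path[1:]))
        for lam in range(NUM_LAMBDAS):        # lowest wavelength free on all hops
            if all(lam not in g.edges[u, v]["used"] for u, v in hops):
                for u, v in hops:
                    g.edges[u, v]["used"].add(lam)
                return path, lam
        return None                           # blocked: no continuous wavelength

    print(first_fit_rwa("DC1", "DC3"))        # e.g. (['DC1', 'DC3'], 0)
    print(first_fit_rwa("DC1", "DC3"))        # next request gets lambda 1
    ```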

  13. Bringing Computational Thinking into the High School Science and Math Classroom

    Science.gov (United States)

    Trouille, Laura; Beheshti, E.; Horn, M.; Jona, K.; Kalogera, V.; Weintrop, D.; Wilensky, U.; CT-STEM Project, Northwestern University; Center for Talent Development, Northwestern University

    2013-01-01

    Computational thinking (for example, the thought processes involved in developing algorithmic solutions to problems that can then be automated for computation) has revolutionized the way we do science. The Next Generation Science Standards require that teachers support their students’ development of computational thinking and computational modeling skills. As a result, there is a very high demand among teachers for quality materials. Astronomy provides an abundance of opportunities to support student development of computational thinking skills. Our group has taken advantage of this to create a series of astronomy-based computational thinking lesson plans for use in typical physics, astronomy, and math high school classrooms. This project is funded by the NSF Computing Education for the 21st Century grant and is jointly led by Northwestern University’s Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), the Computer Science department, the Learning Sciences department, and the Office of STEM Education Partnerships (OSEP). I will also briefly present the online ‘Astro Adventures’ courses for middle and high school students I have developed through NU’s Center for Talent Development. The online courses take advantage of many of the amazing online astronomy enrichment materials available to the public, including a range of hands-on activities and the ability to take images with the Global Telescope Network. The course culminates with an independent computational research project.

  14. Center for Space Transportation and Applied Research Fifth Annual Technical Symposium Proceedings

    Science.gov (United States)

    1993-01-01

    This Fifth Annual Technical Symposium, sponsored by the UT-Calspan Center for Space Transportation and Applied Research (CSTAR), is organized to provide an overview of the technical accomplishments of the Center's five Research and Technology focus areas during the past year. These areas include chemical propulsion, electric propulsion, commercial space transportation, computational methods, and laser materials processing. Papers in the area of artificial intelligence/expert systems are also presented.

  15. ASTEC: Controls analysis for personal computers

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  16. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of such distributed computing infrastructures are single computing centers, such as the Worldwide LHC Computing Grid Tier 2 center GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal was achieved by analyzing the GoeGrid monitoring information. The data analysis approach was based on an adaptive-network-based fuzzy inference system (ANFIS) and a machine learning algorithm, the linear Support Vector Machine (SVM). The main object of the research was the digital service, since availability, reliability and serviceability of the computing platform can be measured according to the const...
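
    As a sketch of the SVM side of such monitoring analysis (the ANFIS part is not shown, and this is not the thesis code), a linear SVM can be trained to flag unhealthy worker nodes from a few monitoring metrics; the feature set and synthetic data below are assumptions for illustration.

    ```python
    # Hedged sketch of SVM-based monitoring analysis: classify worker nodes
    # as healthy/unhealthy from monitoring metrics. Features and synthetic
    # training data are invented for illustration; the thesis used real
    # GoeGrid monitoring data (and also an ANFIS model, not shown here).
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    # features per node: [cpu_load, failed_job_rate, io_wait_fraction]
    healthy = rng.normal([0.5, 0.02, 0.05], 0.05, size=(200, 3))
    unhealthy = rng.normal([0.9, 0.30, 0.25], 0.05, size=(200, 3))
    X = np.vstack([healthy, unhealthy])
    y = np.array([0] * 200 + [1] * 200)       # 1 = unhealthy

    clf = LinearSVC(C=1.0).fit(X, y)
    probe = np.array([[0.85, 0.25, 0.20]])    # a suspicious node
    print("unhealthy" if clf.predict(probe)[0] else "healthy")
    ```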

  17. Quantum computing with defects

    Science.gov (United States)

    Varley, Joel

    2011-03-01

    The development of a quantum computer is contingent upon the identification and design of systems for use as qubits, the basic units of quantum information. One of the most promising candidates consists of a defect in diamond known as the nitrogen-vacancy (NV-1) center, since it is an individually-addressable quantum system that can be initialized, manipulated, and measured with high fidelity at room temperature. While the success of the NV-1 stems from its nature as a localized "deep-center" point defect, no systematic effort has been made to identify other defects that might behave in a similar way. We provide guidelines for identifying other defect centers with similar properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate systems. To elucidate these points, we compare electronic structure calculations of the NV-1 center in diamond with those of several deep centers in 4H silicon carbide (SiC). Using hybrid functionals, we report formation energies, configuration-coordinate diagrams, and defect-level diagrams to compare and contrast the properties of these defects. We find that the NC VSi(-1) center in SiC, a structural analog of the NV-1 center in diamond, may be a suitable center with very different optical transition energies. We also discuss how the proposed criteria can be translated into guidelines to discover NV analogs in other tetrahedrally coordinated materials. This work was performed in collaboration with J. R. Weber, W. F. Koehl, B. B. Buckley, A. Janotti, C. G. Van de Walle, and D. D. Awschalom. This work was supported by ARO, AFOSR, and NSF.

  18. Best Practice Guidelines for Computer Technology in the Montessori Early Childhood Classroom.

    Science.gov (United States)

    Montminy, Peter

    1999-01-01

    Presents a draft for a principle-centered position statement of a Montessori early childhood program in central Pennsylvania, on the pros and cons of computer use in a Montessori 3-6 classroom. Includes computer software rating form. (Author/KB)

  19. A qualitative study adopting a user-centered approach to design and validate a brain computer interface for cognitive rehabilitation for people with brain injury.

    Science.gov (United States)

    Martin, Suzanne; Armstrong, Elaine; Thomson, Eileen; Vargiu, Eloisa; Solà, Marc; Dauwalder, Stefan; Miralles, Felip; Daly Lynn, Jean

    2017-07-14

    Cognitive rehabilitation is established as a core intervention within rehabilitation programs following a traumatic brain injury (TBI). Digitally enabled assistive technologies offer opportunities for clinicians to increase remote access to rehabilitation, supporting the transition into the home. Brain Computer Interface (BCI) systems can harness the residual abilities of individuals with limited function to gain control over computers through their brain waves. This paper presents an online cognitive rehabilitation application developed with therapists to work remotely with people who have TBI and who will use BCI at home to engage in the therapy. A qualitative research study was completed with community-dwelling people post brain injury (end users) and a cohort of therapists involved in cognitive rehabilitation. A user-centered approach over three phases guided the development, design and feasibility testing of this cognitive rehabilitation application, which included two tasks (Find-a-Category and a Memory Card task). The therapist could remotely prescribe activity with different levels of difficulty. The service user had a home interface which would present the therapy activities. This novel work was achieved by an international consortium of academics, business partners and service users.

  20. Ambient radiation levels in positron emission tomography/computed tomography (PET/CT) imaging center

    Science.gov (United States)

    Santana, Priscila do Carmo; de Oliveira, Paulo Marcio Campos; Mamede, Marcelo; Silveira, Mariana de Castro; Aguiar, Polyanna; Real, Raphaela Vila; da Silva, Teógenes Augusto

    2015-01-01

    Objective To evaluate the level of ambient radiation in a PET/CT center. Materials and Methods Previously selected and calibrated TLD-100H thermoluminescent dosimeters were utilized to measure room radiation levels. During 32 days, the detectors were placed in several strategically selected points inside the PET/CT center and in adjacent buildings. After the exposure period the dosimeters were collected and processed to determine the radiation level. Results In none of the points selected for measurement did the values exceed the radiation dose threshold for controlled areas (5 mSv/year) or free areas (0.5 mSv/year) as recommended by the Brazilian regulations. Conclusion In the present study the authors demonstrated that the whole shielding system is appropriate and, consequently, the workers are exposed to doses below the threshold established by Brazilian standards, provided the radiation protection standards are followed. PMID:25798004

  1. Scientific and technical information output of the Langley Research Center

    Science.gov (United States)

    1984-01-01

    Scientific and technical information that the Langley Research Center produced during the calendar year 1983 is compiled. Included are citations for Formal Reports, Quick-Release Technical Memorandums, Contractor Reports, Journal Articles and other Publications, Meeting Presentations, Technical Talks, Computer Programs, Tech Briefs, and Patents.

  2. Cloud computing and digital media fundamentals, techniques, and applications

    CERN Document Server

    Li, Kuan-Ching; Shih, Timothy K

    2014-01-01

    Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more.The book covers resource optimization for multimedia clo

  3. The Student/Library Computer Science Collaborative

    Science.gov (United States)

    Hahn, Jim

    2015-01-01

    With funding from an Institute of Museum and Library Services demonstration grant, librarians of the Undergraduate Library at the University of Illinois at Urbana-Champaign partnered with students in computer science courses to design and build student-centered mobile apps. The grant work called for demonstration of student collaboration…

  4. "Hack" Is Not A Dirty Word--The Tenth Anniversary of Patron Access Microcomputer Centers in Libraries.

    Science.gov (United States)

    Dewey, Patrick R.

    1986-01-01

    The history of patron access microcomputers in libraries is described as carrying on a tradition that information and computer power should be shared. Questions that all types of libraries need to ask in planning microcomputer centers are considered and several model centers are described. (EM)

  5. Applications of Computer Technology in Complex Craniofacial Reconstruction

    Directory of Open Access Journals (Sweden)

    Kristopher M. Day, MD

    2018-03-01

    Conclusion: Modern 3D technology allows the surgeon to better analyze complex craniofacial deformities, precisely plan surgical correction with computer simulation of results, customize osteotomies, plan distractions, and print 3DPCI as needed. Advanced 3D computer technology can be applied safely and may improve aesthetic and functional outcomes after complex craniofacial reconstruction. These techniques warrant further study and may be reproducible in various centers of care.

  6. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    Science.gov (United States)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  7. Aspects of computer control from the human engineering standpoint

    International Nuclear Information System (INIS)

    Huang, T.V.

    1979-03-01

    A Computer Control System includes data acquisition, information display and output control signals. In order to design such a system effectively we must first determine the required operational mode: automatic control (closed loop), computer assisted (open loop), or hybrid control. The choice of operating mode will depend on the nature of the plant, the complexity of the operation, the funds available, and the technical expertise of the operating staff, among many other factors. Once the mode has been selected, consideration must be given to the method (man/machine interface) by which the operator interacts with the system. The human engineering factors are of prime importance to achieving high operating efficiency, and very careful attention must be given to this aspect of the work if full operator acceptance is to be achieved. This paper will discuss these topics and will draw on experience gained in setting up the computer control system in the Main Control Center for Stanford University's Accelerator Center (a high energy physics research facility).

  8. Modeling Road Traffic Using Service Center

    Directory of Open Access Journals (Sweden)

    HARAGOS, I.-M.

    2012-05-01

    Full Text Available Transport systems have an essential role in modern society because they facilitate access to natural resources and stimulate trade. Current studies aim at improving transport networks by developing new optimization methods. Because of the increase in the global number of cars, one of the most common problems facing a transport network is congestion. By creating traffic models and simulating them, we can anticipate this problem and find appropriate solutions. In this paper we propose a new method for modeling traffic. This method considers road intersections as service centers. A service center represents a set consisting of a queue followed by one or multiple servers. This model was used to simulate real situations in an urban traffic area. Based on this simulation, we determined the optimal operating point and computed the performance measures.
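
    If a single intersection is modeled as the simplest service center, an M/M/1 queue with Poisson arrivals and one server, the standard formulas yield the performance measures directly. A worked sketch with hypothetical arrival and service rates:

    ```python
    # Worked example of service-center performance measures, assuming the
    # simplest case: one intersection as an M/M/1 queue (Poisson arrivals,
    # exponential service, single server). Rates below are hypothetical.
    def mm1_metrics(lam: float, mu: float) -> dict:
        """lam = arrival rate (veh/s), mu = service rate (veh/s), lam < mu."""
        assert lam < mu, "queue is unstable unless lam < mu"
        rho = lam / mu                 # utilization
        L = rho / (1 - rho)            # mean number of vehicles in the system
        W = 1 / (mu - lam)             # mean time in system (s)
        Lq = rho**2 / (1 - rho)        # mean queue length (waiting only)
        Wq = rho / (mu - lam)          # mean waiting time (s)
        return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

    # Intersection serving ~1 vehicle every 2 s, arrivals ~1 every 2.5 s:
    print(mm1_metrics(lam=0.4, mu=0.5))
    # rho = 0.8, L = 4 vehicles, W = 10 s, Lq = 3.2, Wq = 8 s
    ```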

  9. Clinical utility of dental cone-beam computed tomography: current perspectives

    OpenAIRE

    Jaju, Prashant P; Jaju, Sushma P

    2014-01-01

    Prashant P Jaju,1 Sushma P Jaju2; 1Oral Medicine and Radiology, 2Conservative Dentistry and Endodontics, Rishiraj College of Dental Sciences and Research Center, Bhopal, India. Abstract: Panoramic radiography and computed tomography were the pillars of maxillofacial diagnosis. With the advent of cone-beam computed tomography, dental practice has seen a paradigm shift. This review article highlights the potential applications of cone-beam computed tomography in the fields of dental implantology an...

  10. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  11. Interactive design center.

    Energy Technology Data Exchange (ETDEWEB)

    Pomplun, Alan R. (Sandia National Laboratories, Livermore, CA)

    2005-07-01

    Sandia's advanced computing resources provide researchers, engineers and analysts with the ability to develop and render highly detailed large-scale models and simulations. To take full advantage of these multi-million data point visualizations, display systems with comparable pixel counts are needed. The Interactive Design Center (IDC) is a second generation visualization theater designed to meet this need. The main display integrates twenty-seven projectors in a 9-wide by 3-high array with a total display resolution of more than 35 million pixels. Six individual SmartBoard displays offer interactive capabilities that include on-screen annotation and touch panel control of the facility's display systems. This report details the design, implementation and operation of this innovative facility.

  12. Computed tomography demonstration of a hypothalamic metastasis

    International Nuclear Information System (INIS)

    Chakeres, D.W.

    1983-01-01

    This case report describes a patient who presented with panhypopituitarism secondary to hypothalamic metastasis. A primary hypothalamic abnormality was suggested by computed tomographic (CT) demonstration of a small enhancing circular mass centered within the hypothalamus. Sellar radiographs and cerebral angiography were normal. (orig.)

  13. Computed tomography demonstration of a hypothalamic metastasis

    Energy Technology Data Exchange (ETDEWEB)

    Chakeres, D.W.

    1983-05-01

    This case report describes a patient who presented with panhypopituitarism secondary to hypothalamic metastasis. A primary hypothalamic abnormality was suggested by computed tomographic (CT) demonstration of a small enhancing circular mass centered within the hypothalamus. Sellar radiographs and cerebral angiography were normal.

  14. Physics of the 1 Teraflop RIKEN-BNL-Columbia QCD project. Proceedings of RIKEN BNL Research Center workshop: Volume 13

    International Nuclear Information System (INIS)

    1998-01-01

    A workshop was held at the RIKEN-BNL Research Center on October 16, 1998, as part of the first anniversary celebration for the center. This meeting brought together the physicists from RIKEN-BNL, BNL and Columbia who are using the QCDSP (Quantum Chromodynamics on Digital Signal Processors) computer at the RIKEN-BNL Research Center for studies of QCD. Many of the talks in the workshop were devoted to domain wall fermions, a discretization of the continuum description of fermions which preserves the global symmetries of the continuum, even at finite lattice spacing. This formulation has been the subject of analytic investigation for some time and has reached the stage where large-scale simulations in QCD seem very promising. With the computational power available from the QCDSP computers, scientists are looking forward to an exciting time for numerical simulations of QCD

  15. Automated Library of the Future: Estrella Mountain Community College Center.

    Science.gov (United States)

    Community & Junior College Libraries, 1991

    1991-01-01

    Describes plans for the Integrated High Technology Library (IHTL) at the Maricopa County Community College District's new Estrella Mountain campus, covering collaborative planning, the IHTL's design, and guidelines for the new center and campus (e.g., establishing computing/information-access across the curriculum; developing lifelong learners;…

  16. A User-Centered Cooperative Information System for Medical Imaging Diagnosis.

    Science.gov (United States)

    Gomez, Enrique J.; Quiles, Jose A.; Sanz, Marcos F.; del Pozo, Francisco

    1998-01-01

    Presents a cooperative information system for remote medical imaging diagnosis. General computer-supported cooperative work (CSCW) problems addressed are definition of a procedure for the design of user-centered cooperative systems (conceptual level); and improvement of user feedback and optimization of the communication bandwidth in highly…

  17. Computation for LHC experiments: a worldwide computing grid; Le calcul scientifique des experiences LHC: une grille de production mondiale

    Energy Technology Data Exchange (ETDEWEB)

    Fairouz, Malek [Universite Joseph-Fourier, LPSC, CNRS-IN2P3, Grenoble I, 38 (France)

    2010-08-15

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 bytes per second and a recording capacity of a few tens of 10^15 bytes each year. In order to meet this challenge, a computing network implying the dispatch and sharing of tasks has been set up: the Worldwide LHC Computing Grid (W-LCG), which is made up of 4 tiers. Tier 0 is the computing center at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier 1 centers. A Tier 1 is typically a national center; it is responsible for keeping a copy of the raw data and for processing it in order to recover relevant data with a physical meaning, and for transferring the results to the 150 Tier 2 centers. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 sites are at the level of individual laboratories; they provide a complementary, local resource to the Tier 2 centers for data analysis. (A.C.)
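
    A quick consistency check of the quoted figures, assuming the conventional ~10^7 seconds of accelerator live time per year and an illustrative sustained rate of 3 GB/s (both assumptions, within the ranges stated above):

    ```python
    # Back-of-the-envelope check of the quoted WLCG data rates, assuming the
    # conventional ~1e7 seconds of accelerator live time per year (a standard
    # HEP rule of thumb; the 3 GB/s sustained rate is an illustrative value
    # within the "few 10^9 bytes per second" quoted above).
    LIVE_SECONDS_PER_YEAR = 1.0e7
    sustained_rate_bytes_s = 3.0e9          # "a few 10^9 bytes per second"

    bytes_per_year = sustained_rate_bytes_s * LIVE_SECONDS_PER_YEAR
    print(f"{bytes_per_year:.1e} bytes/year "
          f"= {bytes_per_year / 1e15:.0f} x 10^15 bytes (tens of PB)")  # 3e16
    ```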

  18. 76 FR 50460 - Privacy Act of 1974; Notice of a Computer Matching Program

    Science.gov (United States)

    2011-08-15

    ... records will be disclosed for the purpose of this computer match are as follows: OPM will use the system... entitled to health care under TRS and TRR.'' E. Description of Computer Matching Program: Under the terms...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD...

  19. A User-Centered Mobile Cloud Computing Platform for Improving Knowledge Management in Small-to-Medium Enterprises in the Chilean Construction Industry

    Directory of Open Access Journals (Sweden)

    Daniela Núñez

    2018-03-01

    Full Text Available Knowledge management (KM) is a key element for the development of small-to-medium enterprises (SMEs) in the construction industry. This is particularly relevant in Chile, where this industry is composed almost entirely of SMEs. Although various KM system proposals can be found in the literature, they are not suitable for SMEs, due to usability problems, budget constraints, and time and connectivity issues. Mobile Cloud Computing (MCC) systems offer several advantages to construction SMEs, but they have not yet been exploited to address KM needs. Therefore, this research is aimed at the development of an MCC-based KM platform to manage lessons learned in different construction projects of SMEs, through an iterative and user-centered methodology. Usability and quality evaluations of the proposed platform show that MCC is a feasible and attractive option to address the KM issues in SMEs of the Chilean construction industry, since it is possible to consider both technical and usability requirements.

  20. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center: Study protocol for a randomised cross-over design.

    Science.gov (United States)

    Antunes, Thaiany Pedrozo Campos; Oliveira, Acary Souza Bulle de; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; Abreu, Luiz Carlos de

    2017-03-01

    Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to the social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, they might help to initiate contact with modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done at 3 moments: moment 0 (T0) - at baseline; moment 1 (T1) - after 8 typical computer classes; and moment 2 (T2) - after 8 computer classes which include 15 minutes for practicing games in a VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT

  1. The Impact of Wireless Technology on Order Selection Audits at an Auto Parts Distribution Center

    Science.gov (United States)

    Goomas, David T.

    2012-01-01

    Audits of store order pallets or totes performed by auditors at five distribution centers (two experimental and three comparison distribution centers) were used to check for picking accuracy prior to being loaded onto a truck for store delivery. Replacing the paper audits with wireless handheld computers that included immediate auditory and visual…

  2. Simulating Shopper Behavior using Fuzzy Logic in Shopping Center Simulation

    Directory of Open Access Journals (Sweden)

    Jason Christian

    2016-12-01

    Full Text Available To simulate real-world phenomena, a computer tool can be used to run a simulation and provide a detailed report. By using a computer-aided simulation tool, we can retrieve information relevant to the simulated subject in a relatively short time. This study is an extended and complete version of initial research by Christian and Hansun and presents a prototype of a multi-agent shopping center simulation tool along with a fuzzy logic algorithm implemented in the system. Shopping centers and all their components are represented in a simulated 3D environment. The simulation tool was created using the Unity3D engine to build the 3D environment and to run the simulation. To model and simulate the behavior of agents inside the simulation, a fuzzy logic algorithm that uses the agents' basic knowledge as input was built to determine the agents' behavior inside the system and to simulate human behavior as realistically as possible.
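
    To give a flavor of the fuzzy-logic step, the sketch below maps an agent's hunger and budget to a tendency to visit the food court, using invented triangular membership functions and two hand-written rules; the paper's actual rule base is not reproduced here.

    ```python
    # Minimal fuzzy inference sketch for a shopper agent (min for AND,
    # weighted-average defuzzification): fuzzify hunger and budget, apply
    # two rules, and defuzzify to a food-court-visit tendency. Membership
    # functions and rules are invented for illustration.

    def tri(x: float, a: float, b: float, c: float) -> float:
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def food_court_tendency(hunger: float, budget: float) -> float:
        """hunger, budget in [0, 1]; returns tendency in [0, 1]."""
        hungry = tri(hunger, 0.3, 1.0, 1.7)     # "hungry" peaks at 1.0
        rich = tri(budget, 0.2, 1.0, 1.8)       # "has money to spend"
        broke = tri(budget, -0.8, 0.0, 0.5)     # "short on money"
        w1 = min(hungry, rich)   # Rule 1: hungry AND has money -> go eat (0.9)
        w2 = broke               # Rule 2: short on money -> keep walking (0.1)
        return (w1 * 0.9 + w2 * 0.1) / (w1 + w2) if (w1 + w2) else 0.5

    print(food_court_tendency(hunger=0.8, budget=0.7))   # high tendency (0.9)
    print(food_court_tendency(hunger=0.8, budget=0.1))   # low tendency (0.1)
    ```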

  3. Computer network prepared to handle massive data flow

    CERN Multimedia

    2006-01-01

    "Massive quantities of data will soon begin flowing from the largest scientific instrument ever built into an internationl network of computer centers, including one operated jointly by the University of Chicago and Indiana University." (2 pages)

  4. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Potok, Thomas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher-level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the

  5. High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS

    Science.gov (United States)

    Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian

    2017-09-01

    In its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, the authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chain, and provide evaluation results on the test corpus selected by the JPEG committee.

  6. Random migration processes between two stochastic epidemic centers.

    Science.gov (United States)

    Sazonov, Igor; Kelbert, Mark; Gravenor, Michael B

    2016-04-01

    We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically. Copyright © 2016 Elsevier Inc. All rights reserved.
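
    The model class described here, Markov-chain epidemics in population centers coupled by random migration, can be prototyped with a small Gillespie-type simulation. The sketch below uses hypothetical rates and, unlike the paper, does not separate residents from visitors:

    ```python
    # Gillespie-style sketch of two stochastic SIR epidemic centers coupled
    # by random migration of individuals, the general model class studied
    # above. All rates are hypothetical, and unlike the paper this toy does
    # not track resident vs. visitor status separately.
    import numpy as np

    rng = np.random.default_rng(1)
    beta, gamma, mig = 0.3, 0.1, 0.01      # infection, recovery, migration rates
    state = np.array([[990, 10, 0],        # center 0: [S, I, R]
                      [1000, 0, 0]], dtype=float)

    t, t_end = 0.0, 100.0
    while t < t_end and state[:, 1].sum() > 0:
        rates, events = [], []
        for c in (0, 1):
            S, I, R = state[c]
            N = S + I + R
            if N > 0:
                rates += [beta * S * I / N, gamma * I]
                events += [("inf", c, None), ("rec", c, None)]
            for k in (0, 1, 2):            # migration of S/I/R from c to 1-c
                rates.append(mig * state[c, k])
                events.append(("mig", c, k))
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        kind, c, k = events[rng.choice(len(events), p=np.array(rates) / total)]
        if kind == "inf":
            state[c, 0] -= 1; state[c, 1] += 1
        elif kind == "rec":
            state[c, 1] -= 1; state[c, 2] += 1
        else:
            state[c, k] -= 1; state[1 - c, k] += 1

    print("final [S, I, R] per center:", state.astype(int))
    ```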

  7. Lessons learned: mobile device encryption in the academic medical center.

    Science.gov (United States)

    Kusche, Kristopher P

    2009-01-01

    The academic medical center is faced with the unique challenge of meeting the multi-faceted needs of both a modern healthcare organization and an academic institution. The need for security to protect patient information must be balanced against the academic freedoms expected in the college setting. The Albany Medical Center, consisting of the Albany Medical College and the Albany Medical Center Hospital, was challenged with implementing a solution that would preserve the availability, integrity and confidentiality of business, patient and research data stored on mobile devices. To solve this problem, Albany Medical Center implemented a mobile encryption suite across the enterprise. Such an implementation comes with complexities, from performance across multiple generations of computers and operating systems to diversity of application use modes and end user adoption, all of which require thoughtful policy and standards creation, understanding of regulations, and a willingness and ability to work through such diverse needs.

  8. A Dynamic and Interactive Monitoring System of Data Center Resources

    Directory of Open Access Journals (Sweden)

    Yu Ling-Fei

    2016-01-01

    Full Text Available To maximize the utilization and effectiveness of resources, a well-suited management system is essential for modern data centers. Traditional approaches to resource provisioning and service requests have proven to be ill suited for virtualization and cloud computing, and the manual handoffs between technology teams were highly inefficient and poorly documented. In this paper, a dynamic and interactive monitoring system for data center resources, ResourceView, is presented. By consolidating all data center management functionality into a single interface, ResourceView provides a common view of metric status over time, while offering comprehensive, centralized monitoring of data center physical and virtual IT assets including power, cooling, physical space and VMs, in order to improve availability and efficiency. In addition, servers and VMs can be monitored from several viewpoints such as clusters, racks and projects, which is very convenient for users.

  9. Synergies and Distinctions between Computational Disciplines in Biomedical Research: Perspective from the Clinical and Translational Science Award Programs

    Science.gov (United States)

    Bernstam, Elmer V.; Hersh, William R.; Johnson, Stephen B.; Chute, Christopher G.; Nguyen, Hien; Sim, Ida; Nahm, Meredith; Weiner, Mark; Miller, Perry; DiLaura, Robert P.; Overcash, Marc; Lehmann, Harold P.; Eichmann, David; Athey, Brian D.; Scheuermann, Richard H.; Anderson, Nick; Starren, Justin B.; Harris, Paul A.; Smith, Jack W.; Barbour, Ed; Silverstein, Jonathan C.; Krusch, David A.; Nagarajan, Rakesh; Becich, Michael J.

    2010-01-01

    Clinical and translational research increasingly requires computation. Projects may involve multiple computationally-oriented groups including information technology (IT) professionals, computer scientists and biomedical informaticians. However, many biomedical researchers are not aware of the distinctions among these complementary groups, leading to confusion, delays and sub-optimal results. Although written from the perspective of clinical and translational science award (CTSA) programs within academic medical centers, the paper addresses issues that extend beyond clinical and translational research. The authors describe the complementary but distinct roles of operational IT, research IT, computer science and biomedical informatics using a clinical data warehouse as a running example. In general, IT professionals focus on technology. The authors distinguish between two types of IT groups within academic medical centers: central or administrative IT (supporting the administrative computing needs of large organizations) and research IT (supporting the computing needs of researchers). Computer scientists focus on general issues of computation such as designing faster computers or more efficient algorithms, rather than specific applications. In contrast, informaticians are concerned with data, information and knowledge. Biomedical informaticians draw on a variety of tools, including but not limited to computers, to solve information problems in health care and biomedicine. The paper concludes with recommendations regarding administrative structures that can help to maximize the benefit of computation to biomedical research within academic health centers. PMID:19550198

  10. Stern-Center Potsdam

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1999-07-01

    The "Stern-Center Potsdam" is a star-shaped shopping mall in the city center. As a shopping center at the gates of Berlin, the Stern-Center in Potsdam offers space for a large number of shops. The star-shaped structure of the building forms the focal point of the 'Am Stern' district. (orig.)

  11. Requirements for SSC central computing staffing (conceptual)

    International Nuclear Information System (INIS)

    Pfister, J.

    1985-01-01

    Given a computation center with ~10,000 MIPS supporting ~1,000 users, what are the staffing requirements? The attempt in this paper is to list the functions and staff size required in a central computing or centrally supported computing complex. The organization assumes that although considerable computing power would exist (mostly for online) in the four interaction regions (IR) that there are functions/capabilities better performed outside the IR and in this model at a ''central computing facility.'' What follows is one staffing approach, not necessarily optimal, with certain assumptions about numbers of computer systems, media, networks and system controls, that is, one would get the best technology available. Thus, it is speculation about what the technology may bring and what it takes to operate it. From an end user support standpoint it is less clear, given the geography of an SSC, where and what the consulting support should look like and its location

  12. 8th Workshop on Computational Optimization

    CERN Document Server

    2016-01-01

    This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2015. It presents recent advances in computational optimization. The volume includes important real-life problems such as parameter settings for controlling processes in a bioreactor, control of ethanol production, the minimal convex hull with application in routing algorithms, graph coloring, flow design in photonic data transport systems, predicting indoor temperature, crisis control center monitoring, fuel consumption of helicopters, portfolio selection, GPS surveying and so on. It shows how to develop algorithms for them based on new metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization problems.

  13. A 3-Month Randomized Controlled Pilot Trial of a Patient-Centered, Computer-Based Self-Monitoring System for the Care of Type 2 Diabetes Mellitus and Hypertension.

    Science.gov (United States)

    Or, Calvin; Tao, Da

    2016-04-01

    This study was performed to evaluate the effects of a patient-centered, tablet computer-based self-monitoring system for chronic disease care. A 3-month randomized controlled pilot trial was conducted to compare the use of a computer-based self-monitoring system in disease self-care (intervention group; n = 33) with a conventional self-monitoring method (control group; n = 30) in patients with type 2 diabetes mellitus and/or hypertension. The system was equipped with a 2-in-1 blood glucose and blood pressure monitor, a reminder feature, and video-based educational materials for the care of the two chronic diseases. The control patients were given only the 2-in-1 monitor for self-monitoring. The outcomes reported here included the glycated hemoglobin (HbA1c) level, fasting blood glucose level, systolic blood pressure, diastolic blood pressure, chronic disease knowledge, and frequency of self-monitoring. The data were collected at baseline and at 1-, 2-, and 3-month follow-up visits. The patients in the intervention group had a significant decrease in mean systolic blood pressure from baseline to 1 month. Both computer-assisted and conventional disease self-monitoring appear to be useful to support/maintain blood pressure and diabetes control. The beneficial effects of the use of electronic self-care resources and support provided via mobile technologies require further confirmation in longer-term, larger trials.

  14. Modeling subsurface reactive flows using leadership-class computing

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Richard Tran [Computational Earth Sciences Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6015 (United States); Hammond, Glenn E [Hydrology Group, Environmental Technology Division, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Lichtner, Peter C [Hydrology, Geochemistry, and Geology Group, Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Sripathi, Vamsi [Department of Computer Science, North Carolina State University, Raleigh, NC 27695-8206 (United States); Mahinthakumar, G [Department of Civil, Construction, and Environmental Engineering, North Carolina State University, Raleigh, NC 27695-7908 (United States); Smith, Barry F, E-mail: rmills@ornl.gov, E-mail: glenn.hammond@pnl.gov, E-mail: lichtner@lanl.gov, E-mail: vamsi_s@ncsu.edu, E-mail: gmkumar@ncsu.edu, E-mail: bsmith@mcs.anl.gov [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439-4844 (United States)

    2009-07-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.
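
    As a rough illustration of the fully implicit time-stepping the abstract mentions, the sketch below takes backward Euler steps with a Newton solve at each step. It uses plain NumPy on a toy two-species reaction system; the equations and coefficients here are invented for illustration, and PFLOTRAN itself solves far larger coupled systems through PETSc's nonlinear solvers rather than anything like this.

```python
import numpy as np

def f(u):
    # Hypothetical stiff two-species reaction terms (not PFLOTRAN's physics).
    return np.array([-50.0 * u[0] * u[1],
                      50.0 * u[0] * u[1] - 10.0 * u[1]])

def jac(u):
    # Analytic Jacobian df/du, needed by Newton's method.
    return np.array([[-50.0 * u[1], -50.0 * u[0]],
                     [ 50.0 * u[1],  50.0 * u[0] - 10.0]])

def backward_euler_step(u_old, dt, tol=1e-10, max_newton=20):
    # Fully implicit step: solve F(u) = u - u_old - dt * f(u) = 0 for u.
    u = u_old.copy()
    for _ in range(max_newton):
        F = u - u_old - dt * f(u)
        if np.linalg.norm(F) < tol:
            break
        J = np.eye(2) - dt * jac(u)   # dF/du
        u -= np.linalg.solve(J, F)
    return u

u = np.array([1.0, 0.5])
for _ in range(100):
    u = backward_euler_step(u, dt=0.01)
print("state after 100 implicit steps:", u)
```

    The appeal of the implicit approach is that stiff couplings do not force tiny time steps; the price is a nonlinear solve per step, which is exactly the part that must scale well on leadership-class machines.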

  15. Modeling subsurface reactive flows using leadership-class computing

    International Nuclear Information System (INIS)

    Mills, Richard Tran; Hammond, Glenn E; Lichtner, Peter C; Sripathi, Vamsi; Mahinthakumar, G; Smith, Barry F

    2009-01-01

    We describe our experiences running PFLOTRAN, a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media, on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  16. A multigrid algorithm for the cell-centered finite difference scheme

    Science.gov (United States)

    Ewing, Richard E.; Shen, Jian

    1993-01-01

    In this article, we discuss a non-variational V-cycle multigrid algorithm based on the cell-centered finite difference scheme for solving a second-order elliptic problem with discontinuous coefficients. Due to the poor approximation property of piecewise constant spaces and the non-variational nature of our scheme, one step of symmetric linear smoothing in our V-cycle multigrid scheme may fail to be a contraction. At the same time, because of the simple structure of the piecewise constant spaces, prolongation and restriction are trivial, which saves significant computation time; the computational results are very promising.
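
    The sketch below shows the kind of V-cycle the abstract describes, on a one-dimensional cell-centered discretization of a Poisson problem: restriction averages pairs of fine cells, prolongation copies each coarse value to its two fine cells (both trivial, as the abstract notes), and the coarse operator is rediscretized rather than formed variationally. The concrete test problem and the Gauss-Seidel smoother are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=2):
    # Smoother for 1D cell-centered Poisson with Dirichlet faces; boundary
    # cells get diagonal 3/h^2 after folding in the reflected ghost cell.
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            diag = 3.0 if i in (0, n - 1) else 2.0
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = (h * h * f[i] + left + right) / diag
    return u

def residual(u, f, h):
    n = len(u)
    r = np.empty(n)
    for i in range(n):
        diag = 3.0 if i in (0, n - 1) else 2.0
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (diag * u[i] - left - right) / (h * h)
    return r

def v_cycle(u, f, h, sweeps=2):
    u = gauss_seidel(u, f, h, sweeps)
    if len(u) >= 4:
        r = residual(u, f, h)
        rc = 0.5 * (r[0::2] + r[1::2])        # restriction: average cell pairs
        ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h, sweeps)
        u += np.repeat(ec, 2)                 # piecewise-constant prolongation
        u = gauss_seidel(u, f, h, sweeps)
    return u

n, h = 64, 1.0 / 64
x = (np.arange(n) + 0.5) * h
f = np.pi ** 2 * np.sin(np.pi * x)            # manufactured right-hand side
u = np.zeros(n)
for _ in range(30):
    u = v_cycle(u, f, h)
print("residual norm:", np.linalg.norm(residual(u, f, h)))
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```

    Because the transfers are piecewise constant and the coarse operator is rediscretized, this cycle is non-variational in the same sense as the paper's scheme; it trades some convergence speed per cycle for very cheap intergrid operations.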

  17. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    Science.gov (United States)

    1995-10-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  18. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    Science.gov (United States)

    1995-01-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  19. Efficient management of data center resources for massively multiplayer online games

    NARCIS (Netherlands)

    Nae, V.; Iosup, A.; Podlipnig, S.; Prodan, R.; Epema, D.H.J.; Fahringer, T.

    2008-01-01

    Today's massively multiplayer online games (MMOGs) can include millions of concurrent players spread across the world. To keep these highly interactive virtual environments online, an MMOG operator may need to provision tens of thousands of computing resources from various data centers. Faced with …
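
    As a toy sketch of the provisioning problem the abstract raises, the loop below sizes a server pool from a moving-average forecast of concurrent players. The per-server capacity, headroom factor, and load numbers are all hypothetical; the paper's actual resource-management models are more sophisticated.

```python
import math

PLAYERS_PER_SERVER = 500   # hypothetical capacity of one provisioned machine
HEADROOM = 1.2             # keep 20% spare capacity to absorb load spikes

def servers_needed(load_history, window=3):
    # Forecast the next interval's load as a moving average of recent
    # intervals, then size the pool with headroom, rounding up.
    recent = load_history[-window:]
    forecast = sum(recent) / len(recent)
    return max(1, math.ceil(forecast * HEADROOM / PLAYERS_PER_SERVER))

loads = [12000, 15500, 21000, 26500]   # made-up concurrent player counts
for t in range(1, len(loads) + 1):
    pool = servers_needed(loads[:t])
    print(f"after interval {t}: provision {pool} servers for the next interval")
```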

  20. Nuclear safety research collaborations between the U.S. and Russian Federation International Nuclear Safety Centers

    International Nuclear Information System (INIS)

    Hill, D. J.; Braun, J. C.; Klickman, A. E.; Bougaenko, S. E.; Kabonov, L. P.; Kraev, A. G.

    2000-01-01

    The Russian Federation Ministry for Atomic Energy (MINATOM) and the US Department of Energy (USDOE) have formed International Nuclear Safety Centers to collaborate on nuclear safety research. USDOE established the US Center (INSC) at Argonne National Laboratory (ANL) in October 1995. MINATOM established the Russian Center (RINSC) at the Research and Development Institute of Power Engineering (RDIPE) in Moscow in July 1996. In April 1998 the Russian center became a semi-independent, autonomous organization under MINATOM. The goals of the centers are to: Cooperate in the development of technologies associated with nuclear safety in nuclear power engineering; Be international centers for the collection of information important for safety and technical improvements in nuclear power engineering; and Maintain a base of fundamental knowledge needed to design nuclear reactors. The strategic approach being used to accomplish these goals is for the two centers to combine their resources and the talents of the scientists associated with the US Center and the Russian Center in collaborative research to improve the safety of Russian-designed nuclear reactors. The two centers began conducting joint research and development projects in January 1997. Since that time, the following ten joint projects have been initiated: INSC databases--web server and computing center; Coupled codes--neutronic and thermal-hydraulic; Severe accident management for Soviet-designed reactors; Transient management and advanced control; Survey of relevant nuclear safety research facilities in the Russian Federation; Computer code validation for transient analysis of VVER and RBMK reactors; Advanced structural analysis; Development of a nuclear safety research and development plan for MINATOM; Properties and applications of heavy liquid metal coolants; and Material properties measurement and assessment. Currently, there is activity in eight of these projects. Details on each of these …